Monday, June 19, 2023

Use functional programming concepts to increase quality when re-factoring

Sometimes when you want to add a new feature to a system, the developer needs to figure out whether this is new functionality or a modification of something existing. How is this going to fit in, or does it stand on its own? Chances are parts of the functionality are already there, but they may be a mess. There are functions being called from other components doing mostly the same thing, but in slightly different ways. This is a great opportunity to refactor this behaviour into one abstract type so you can add some automated tests, fix the design issue (without fixing everything) and leave the code in a better place than you found it.

By using some functional programming basics, your code can read and perform better than when you found it. By recognizing these patterns and the value they bring, you will start seeing this situation in many places, and it will become easier to do smaller refactors that don't slow down your current estimate but give you the peace of mind that you will save some time down the road.

This will cover (IMHO) the basics of functional programming that benefit any codebase: iterative functions, OOP classes and any ball of mud you run into.

Immutability

Immutable parameters are not changed inside the function; instead, the function uses the value to create a new one.
Changes made to inputs (or other external state) are referred to as 'side effects'.

This is essential when doing concurrent programming. The data you put into a thread can't hold a hard reference (or any reference) to the global value.

By default, try to keep inputs immutable. Be efficient at copying the data you need and returning a new set (or a reference to that set). You can enforce this in Java with the final keyword. In Python, you can pass tuples into functions instead of lists.
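As a sketch in Python (the discount function is purely illustrative, not from any real codebase), the input tuple is never modified; a new value is returned instead:

```python
def apply_discount(prices: tuple, discount: float) -> tuple:
    """Return a new tuple of discounted prices; the input is untouched."""
    return tuple(round(p * (1 - discount), 2) for p in prices)

original = (10.0, 20.0, 30.0)
discounted = apply_discount(original, 0.1)
# original is still (10.0, 20.0, 30.0); discounted is a separate value
```

Because `original` can never change underneath another thread, it is safe to share across concurrent work.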

Functional Composition


When you have many functions in a file, the caller has to know all of the details of the internals to be able to create the desired behaviour by combining the function calls in the correct sequence.

This violates basic encapsulation, and leaving it will cause messy coupling between components, with pieces of code calling specific internal functions. It is easier to provide an interface that exposes specific behaviour. There is a parallel here with the Facade pattern in OOP: provide a simplified interface for the behaviour of the component, then refactor all references to that functionality to the new interface. This is a great refactoring strategy for adding some functional design to your classes. Use abstract data types or interfaces to enforce the behaviour of the internals.
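One way to sketch this in Python is a small compose helper that turns a fixed sequence of internal functions into a single named behaviour, so callers no longer sequence the internals themselves (the cleanup steps here are purely illustrative):

```python
from functools import reduce

def compose(*funcs):
    """Chain functions left to right into one callable."""
    return lambda value: reduce(lambda acc, f: f(acc), funcs, value)

# the exposed behaviour; callers never see the individual steps
normalize_name = compose(str.strip, str.lower)
```

This plays the same role as a Facade: one interface (`normalize_name`) instead of a sequence of internal calls that every caller has to get right.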

Higher-order functions


Using higher-order functions in whatever language you are working in is really important. Why? Because you end up writing these functions anyway. So you can create your own versions with many loops and callbacks, or just use the built-in versions that come with many languages, or that are available as a library extending the language's base functionality.

The most common are map, filter, and reduce:

  • Map - apply a function to all elements in the list, producing a new list of transformed values; the original list is unchanged
  • Filter - remove elements from the list that don’t satisfy a certain condition
  • Reduce - apply a function to each element, and combine all results into one value
Immutability in these functions.
With map, filter and reduce, you are reading the elements and producing a new output; the input list is left unchanged. Because the inputs stay immutable, you can partition the list and use concurrency to run these operations on the partitions in parallel.
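In Python, the three look like this (the data is just an example):

```python
from functools import reduce

values = [1, 2, 3, 4, 5]

squared = list(map(lambda x: x * x, values))        # new list of transformed values
evens = list(filter(lambda x: x % 2 == 0, values))  # keep elements matching the condition
total = reduce(lambda acc, x: acc + x, values, 0)   # combine all elements into one value
# values itself is unchanged by all three
```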

Monoid

This has an obscure name, but it's really some very fundamental algebraic rules for designing functional behaviour. Monoids have properties, and it's a good idea to follow these as guidelines when creating the functions that are applied in higher-order functions like map, reduce and filter.

Closure - the operation takes values of a type and returns a value of the same type. When you pass in integers, you have integers returned.

Identity - depending on the operation, the base value for the type
  • for addition: 0  (1 + 0  = 1)
  • multiplication: is 1  (1 * 5 = 5)
  • concatenation is empty: "this" + "" = "this"
Associative - grouping doesn't change the result
  • (a + b) + c = a + (b + c)
  • (a * b) * c = a * (b * c)
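A small Python sketch of why these rules matter for higher-order functions: when the combining operation is closed and associative with an identity, a reduce can be split into partitions and the partial results combined afterwards (the chunking here is only illustrative):

```python
from functools import reduce

def add(a: int, b: int) -> int:
    return a + b  # closed over ints, associative, identity is 0

values = [1, 2, 3, 4]
total = reduce(add, values, 0)

# associativity lets each partition be reduced independently
left = reduce(add, values[:2], 0)
right = reduce(add, values[2:], 0)
combined = add(left, right)  # same result as reducing the whole list
```

This is exactly the property that makes it safe to run reduce over partitions in parallel.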

Recursion vs Iteration

Recursion can improve readability, but it comes at an execution cost: you can exhaust your stack pretty easily when dealing with large datasets. Use iteration when possible for the more predictable, linear nature of its execution.
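A quick Python illustration of the trade-off: both functions compute the same sum, but the recursive one consumes a stack frame per element and will hit the interpreter's recursion limit on large inputs:

```python
def sum_recursive(values):
    if not values:
        return 0
    return values[0] + sum_recursive(values[1:])  # one stack frame per element

def sum_iterative(values):
    total = 0
    for v in values:  # constant stack depth, predictable linear execution
        total += v
    return total
```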


Conclusion

By using some concepts from functional programming and being able to see these patterns emerge when refactoring code, the quality and consistency of the refactored result will increase. This comes from using tried and true concepts that, like many patterns, will simply emerge from your work and add a consistency that makes the code easier for your teammates to read and maintain.





Thursday, June 15, 2023

Maintainable and Portable codebases increase developer velocity by reducing complexity

When teams are developing the same codebase, teammates will bring their own opinions on best practices for code formatting and quality. Everyone has a lot of experience and curiosity, so it's good to use that knowledge to implement some simple practices that allow the team to move faster together, with more quality.

Starting with some simple tools that implement Maintainability and Portability checks, we can automate these actions to really enable development productivity. Collaborating on defining and evolving these quality attributes will have a significant effect on the quality and velocity of your projects.


Portability is the ability of your project to work on your local machine and in different testing and production environments.

Maintainability is the quality attribute that helps other people on your team work on the same project and be able to add value without too much overhead to understand how it all fits together.

Automation
Automating small problems helps the developer concentrate on the bigger tasks. It always helps productivity; it takes less time and is more consistent than manual checks.

Maintainability

Common code formatting and structure helps team members understand each others work. This results in a more effective PR that can be reviewed and merged easily.

For a first version, add a code formatting and lint tool. The output is much easier to review when the formatting doesn't need to be understood. Running a linter on your code will give you more robust code and reduce complexity.

Type safety

For dynamic languages like Python and JavaScript, a type-checking step is really handy to increase code quality and make more robust functions and classes. Using type hints, a tool like mypy will check the validity of the variables being used to prevent TypeError exceptions. For JavaScript, we now have TypeScript, which adds a lot of type safety on top of vanilla JS.
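A small sketch of what the hints give a checker to work with (the function is illustrative, and the builtin generic `list[float]` assumes Python 3.9+; running mypy over the file is part of your tooling, not the code itself):

```python
def total_price(prices: list[float], tax_rate: float) -> float:
    """With these hints, mypy can flag a call like total_price("oops", 2) before runtime."""
    return sum(prices) * (1 + tax_rate)
```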

Automation


Python
  • Run pre-commit for everything
  • Use the same hook to run on the main branch of the repository when merging pull requests

Node Typescript
  • Prettier
  • ES lint 

Java
  • Prettier
  • Java compilation when building the bytecode files will do the static type checks, due to the nature of the language

Using git for doing the tasks

git is an amazing piece of software engineering for source control and versioning. It will also run tasks, or 'hooks', that execute specific functionality before and after a commit.

Maintainable Structure

When packaging application code for development, there is the application, and then there are all the artifacts around the application. Data files, unit tests and different configurations are needed to build the app, and then there is the app code itself.

YAGNI: Beyond not putting test data into production, it really helps reduce complexity to build the leanest version of the codebase for deploy to production. This helps quality by reducing complexity of the artifacts used to debug when something goes wrong, or to add a new feature.

Maintainable structure is more about the functional nature of the application. The programming language used to implement the system may impose some common conventions, but it's still an application that can be decomposed into a set of features.

package by feature

Many applications have a directory structure that reflects the implementation pattern of the system architecture and not so much the functionality it provides. Directories like "Model, View, Controller" and "Api, Data, Logic", or some variation of what the structural components are.
This gets complex, as you have to add and edit files in many different directories to add a feature. A new SearchCatalog feature would need changes to a file in api, or a new file, and so on through the different parts of the pattern. Not all features are implemented with the same pattern, so having a strict structure can cause issues.

Instead, try packaging by the feature itself. In a team this really helps, as each team operates in its own part of the larger system, with a better understanding of the coupling between the parts.

Let's make a feature called "SearchCatalog" and implement your pattern for the data models, views and logic classes that make the feature work. Understanding the interfaces better makes the dependencies a little easier to see, and can also result in nice shared libraries that are used by many features.

For an API endpoint that uses some logic to read from and write to a datastore, the pattern is a basic robustness pattern.

Responsibilities
  • request/response boundary
    • parameters
    • validate request
    • serialize response
  • logical objects that have behaviour
    • called from boundary object
    • unit test these with objects
  • data entities
    • model and model behaviour
  • workers, caches etc
    • tasks and clients for dependencies

Let's say we make a Python Flask API. Putting these responsibilities in files called "api.py", "search.py", "data.py" and "clients.py" gives the reader an easy time understanding what's in the directory and what it does.
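A minimal sketch of the boundary/logic split in plain Python (the names and validation rules are illustrative, not a real Flask app):

```python
# search.py - logical object with behaviour, unit-testable on its own
def search_catalog(catalog: dict, term: str) -> list:
    return [name for name in catalog if term.lower() in name.lower()]

# api.py - request/response boundary: validate the request, call the
# logic, then serialize the response
def handle_search(catalog: dict, params: dict) -> dict:
    term = params.get("q", "")
    if not term:
        return {"status": 400, "error": "missing query parameter 'q'"}
    return {"status": 200, "results": search_catalog(catalog, term)}
```

Because the logic lives in its own module, it can be unit tested without any HTTP machinery at all.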

Test

By having a functional test with an HTTP client, you can test the API boundary objects; and for any logical classes and components, the tests become clearer because they test the functionality.


Portability


When developing an application there is going to be a need to build the app locally, and deploy to your production environment for the users to use the app. There could be testing, staging, and other environments along the way. Enabling some good practices for the configuration of the app makes developing and testing a lot easier and reduces the stress of a deployment.

  • A new person should be able to pull the repo, run the tests and start the app with just .local.env, and hook into secrets in a vault when in the dev, staging and production environments
  • Keep environment config files versioned and use the environment to select a specific file.

Dependencies should be mocked when developing locally, or you could end up with some side effects, like incurring charges for an API every time you run your unit tests!
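A sketch with the standard library's unittest.mock, assuming a hypothetical paid billing client:

```python
from unittest import mock

class BillingClient:
    """Hypothetical client for a paid third-party API."""
    def charge(self, amount: float) -> str:
        raise RuntimeError("would hit the real, billable API")

def checkout(client: BillingClient, amount: float) -> str:
    return client.charge(amount)

# locally, swap the real client for a mock: no network, no charges
fake = mock.Mock(spec=BillingClient)
fake.charge.return_value = "ok"
result = checkout(fake, 9.99)
```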

Automation

Use pipelines in Bitbucket or GitHub to automatically deploy your main branch to a development server. For larger production deployments, Harness and other tools can deploy to many nodes to reduce the complexity of setting up many production instances.

Test it

How do you validate? Use an automated test to add a quality step to your deployment scripts. The deploy is finished when it's validated, not just deployed.

Testing dependencies config

No matter how careful you are about the portability of the app, things are going to happen with many people working on the application. Using more than a development and a production environment will go a long way to ironing out the issues before deploying to production.

To be even safer, it's a good idea to use your load balancer in production to route only to validated instances. Enable this by deploying to a new node and smoke testing the live production instance before adding it to the load balancer.

Conclusion


By automating some good habits and standards, a nice safety net is created that allows a flow of increased productivity. Group culture is positively affected as the standards are agreed upon and evolved over time. More time is spent adding value to the product, instead of working the codebase.




Friday, June 09, 2023

Consistent software design enables quality and schedule predictability

All forms of technology have an architecture. If that design is intentional, it can enable velocity, as it's understood what is being made. If it's organic and coincidental, the vague plan will result in confusion, unpredictability and lower-quality outcomes.

Why software design?


A coherent design gives a clear understanding of the structure and behaviour of the system. How complete does this design need to be? For civil engineering, the blueprints required to build a structure are quite detailed. Early software design followed this expectation of detail, but it was found that creating detailed designs before implementing the system really slowed the process. This became known as "Big Design Up Front" and was such an impediment to completing a project that the concepts of design were seen as slow and left behind for the sake of velocity.

Where did that lead us? We ended up making big balls of mud really quickly, and then took twice as long to debug the pile and put it into production.

There is a happy medium here, where Just Enough design provides clarity about the finished product but doesn't get lost in the details. It's become clear that asking for a firm estimate (when will this be done?) is really problematic before knowing What is being made and How it gets built.

But we are in a hurry....

When creating products, you make promises to your customers and to yourself about delivering them. It's natural to imagine you can produce more than the time and capabilities allow. Given the rush to deliver, the 'status' of the feature you are delivering becomes the most important question: what's blocking this from getting done? In many cases, the definition of WHAT you are making isn't clear. While this is unclear, there is still more investigation, experimentation and conversation that needs to happen before it's known what is being made.

With a clear picture of what is being made, HOW that thing gets made becomes clearer. The dependencies get understood and the complexities are known, so the steps for making it are known. At this point, the predictability of the plan becomes a little more solid. This can enable a decent estimation of the work.

What is produced?


An understanding of the behaviour and structure that satisfies the simplest requirement. This works on a whiteboard, and I have built many projects with a few people in a room with a whiteboard. It worked as long as the group was small and the deliverable was the group's alone.

As the product scope and the resulting dependencies increased, this informal process didn't scale well. There was always a need for a meeting to clear things up. This gets even more unwieldy in the world of remote work and remote meetings. The need to write things down and operate from a common source of truth is clear.

By having a standard for design, its consistency will start to enable better decisions, as the context for the implementation becomes clearer.

Formalize software design with a standardized template, but keep it really simple. Functional specs and ADRs give the requirements a solid foundation that can be implemented.

How can design be a lightweight process? By never finishing it, or having the expectation that the design is complete and a deliverable on its own. Its purpose is to clarify what the real deliverable is; the implemented feature that has value for the paying customer.

Who is involved?


You need the group to build something significant. Efficient architecture is the result of efficient group communication. Good communication enables velocity when the outputs are recorded and the context is known. Collaborate as a group. Silence probably denotes confusion, not consent or agreement, so this can take a few tries before people start talking.

So what works?

Architecture meetups on the scrum team level. 
  • This isn't a planning or status meeting, it's a discovery meeting. The group needs to discover the value of this feature and the dependencies around it. How does this new piece of functionality fit into the larger picture?

Working groups on the company level. 
  • For the quality attributes that are cross-cutting concerns (Usability, Performance, Security, Maintainability), it's a very efficient way to formalize some basic concepts and enable some consistency in the technical roadmap.

In general: writing things down and being objective about goals will align the group. Evolve your template for a technical spec, and keep removing as much detail as possible. Complexity in the documentation works for consulting, but it's the enemy of efficient design. Keep it clean and revisit it often.

Conclusion

When building a product and operating with velocity, there can be a tendency to not formalize the design for the sake of moving faster.

This can result in early wins for the demo of the implementation; but if the debt incurred does not get addressed the complexity will slow progress of the project.

Communicate and collaborate as a group on What you are making. When this becomes clearer, the process of how it gets built will become more predictable. This predictability makes estimation of effort possible, and the next deadline will be less of a guess and more of a plan.


Thursday, June 08, 2023

Effective Software Architecture meetings

For an effective architecture strategy to develop, there needs to be some alignment when meeting together. A meeting with no agenda or focus can seem like a waste of time. It is. If the meeting invite is "we are going to brainstorm on everything", then meeting fatigue will set in and participants will have less engagement.

It's also essential to be clear that this isn't a status meeting; the status is on the board, backlog and roadmap. Those are the outcomes of these meetings. The status should be visible without a meeting.

The common goal/output of the meeting is a decision on how to move forward, with alignment among the participants. To set the theme and make the output of the technical meeting an effective one, it's essential to understand the context of the meeting and the participants.

Problems and Solutions

How do you enable problem solving and know that a valuable problem is being addressed? How do you enable decision making so that there is a solution to work with, one that can move the understanding of the problem and the technical solution forward? Communication is a lot of listening and focusing on context.

Focus on Context and Outputs

Meetings and the audience

For any meeting you have, take a moment to understand its goal; from that, it should be understood what you are working with. It's also nice for participants to know the context, whether they have topics to bring up, want to listen in the background, or don't need to participate at all.

Discovery

Product discovery meetings are key to understanding user problems for features, and to a deeper understanding of the technical constraints on a new feature.

These could be from ideas from business, dev, product, or triaged from a customer issue or suggestion; but the rationale for adding or modifying the architecture should be rooted from something that creates value in the product.

These output to the Roadmap level, product or technical; and sometimes need a prototype or more investigation to qualify the actual goal (and motivation).

Outputs

  1. Valuable problems
  2. Feature value
  3. Market direction
  4. Business Impact

Audience

  • Product
  • Sales
  • Support
  • Development
  • QA

Planning

You have discovered the valuable problems and understood the value that solving them will bring. Now it's time to plan the technical solution: creating the requirements and design for system features.

User story mapping is really helpful at this stage, as it uncovers the quality attributes needed to support the new feature. Trade-offs are also made at this time, so be mindful of any debt you are going to put on the books. Debt is useful, but like any debt it will overwhelm you if not kept in check.
As you go along, you will learn more, so this works well as a regularly scheduled meeting. In agile, this is usually "backlog grooming".

Output

  • Backlog items for functionality
  • Technical spec
  • Architecture Decision Records (ADR)

Audience

  • Development
  • QA
  • Product

Implementation

The backlog has items for the functionality, and the tech spec has the designs and decisions made. Let's build this thing! This is usually called sprint onboarding or a kick-off meeting, and it gets the project people involved. This is when estimation becomes possible, with the details of what is being made now down on paper.

Outputs

  • implementation Epics for some definition of done
  • interfaces for implementation
  • how requirements build acceptance criteria

Audience

  • Devs
  • QA
  • project managers
  • product managers

Release

Are we releasing work to support an existing feature, or is this a new one? Release meetings can take on many forms and, again, depend on the context. A demo is an internal release, and what is in that demo can be used to form the external release. For the technical architecture, the items can be lost in favour of the functional product notes, but it's important to keep the architecture in step with the release numbers you put on the deliverable.

Version the architecture with the release

Output

  • in the release notes, link to latest requirements that created that version
  • Link docs describing features and acceptance criteria
  • diagrams of structure
  • mockups of UI and flows

Audience

  • product
  • QA
  • devs
  • devops
  • Support
  • Sales

When the meetings happen

There can be a tendency to get people involved, but having the wrong people can really hurt the output. Keep the focus to the group affected. For most architecture decisions, it's usually the team making the feature, when they plan to do the work.
For architecture decisions on cross-cutting quality attributes (Usability, Security, Performance, etc.), it's important to have a working group that focuses on setting the policy (a good checklist) for the functional features to get built on.

Conclusion

When the context is made clear and the relevant participants are involved, architecture meetings can be a reliable way to output relevant decisions. It's essential to know What we are making so we can figure out how we make it and when it happens.

By taking a moment to understand the context and the problem to be addressed, the people who actually need to participate will become clearer, and the chances of a meaningful solution will increase.


Friday, May 26, 2023

The micro-services pattern: the good and the bad


Some patterns get a lot of press, some don't. Microservices have enjoyed a good run, so let's review the good and the bad. Like any architecture challenge, using a pattern is really useful in the correct context, but seeing one pattern as the solution for all problems can be really problematic.


the good. 

Microservices can be taken on by a single team, scaled to handle a specific responsibility of the architecture. Services like Auth (auth and accounts), Notifications, Webhooks or other event handlers are usually good candidates to exist as a service


the bad. 

When not being pragmatic, a developer will apply a pattern for the sake of it, in the hope of getting the benefits. This has led to extremely complex applications, where the networking and added infrastructure can lead to slow and buggy software.


take-away. 

Be pragmatic; understand the problem you are solving and the pattern for implementing the solution will become apparent.

Focus on the functionality of the architecture and the other system attributes of performance, security, portability and maintainability will let you know when you want to split out a feature into its own service


A little history


The idea of decoupling functionality into distinct services has been around for about 20 years now. Web Services as a pattern started showing up as the networking and standardization of the protocols started to mature. 


Early networking was done with unique and proprietary protocols, like IIOP and CORBA. Once XML became a standard and the speed of the infrastructure caught up, the basic setup of XML / HTTP became the common standard. This has now evolved into the JSON / HTTP we mostly use today.


Smaller web services


Netflix was growing so fast that grouping teams around functionality and turning the teams' deliverables into distinct services showed a lot of promise. It seemed like a team of 5 could handle the functionality for a major software feature, and these big features could be deployed independently and communicate with each other to form the product.


From this success, the term ‘micro-services’ became a buzzword, and engineering departments felt the need to keep up and adopt this pattern.


Scaling a specific feature


You can always put your monolith app into a Kubernetes setup and scale it up. For any service that follows the API pattern, where it's a stateless processor of requests, this should work fine.

The trouble can start with other services that have more state and are used differently.


There have been some clear benefits to isolating specific features in the architecture.


Auth (auth and accounts) 

  • This is a high-traffic service that needs high quality. Use a microservice to handle all the details around identity and permissions, and you will be able to see it scale. I have seen an auth service running in Kubernetes for a very large e-commerce site. On the Black Friday shopping event, it had over 4200 nodes running!


Notifications 

  • Sending out notifications, and dealing with the dependencies to do that work, requires a lot of configuration and specific behaviour.


Webhook or other event handlers. 

  • This type of service handles a lot of incoming requests and needs specific configuration and infrastructure.



Too many services


When applying micro-services to any problem, there can be a real problem with the associated complexity. It's much the same effect as having many small required libraries in a framework: the whole thing starts to bloat, and in the end you could end up with a monolith of services.


Also, the dependencies between services, and between the teams developing them, will produce a lot of overhead. This can really slow the velocity of a product cycle.


The performance of a service-based architecture can be improved on the networking side with protobuf and other lower-level protocols, but state management is always going to be a complex problem. How big of a challenge do you need this to be?



Let the problem determine an effective solution. 


Patterns emerge from the architecture, applying patterns for the sake of it just adds needless complexity. Focus on functionality that brings value to your product and as you break down the dependencies and understand your architecture the need to split a specific piece of that functionality into a container will become obvious. 



Being an effective developer in a company


Lessons learned when developing software in a company:

Add value

  • Add value. Look to solve problems, not just build stuff. 
  • Be Pragmatic.
  • Be a professional; ego trips and flame wars have no lasting value, professionalism does
  • Learn, there is always a better way.
  • while(developing) { Have a personal process and refine it }
  • keep a journal/log/todo at all times. keep a running queue or stack (depending on how you organize)

Be a good teammate

  • Don't try to keep everything in your head; put designs and thoughts on paper to formalize them.
  • Extremely high rate of return on information sharing. Collaborate.
  • Utilize tools so you don't have to make new ones that do the same thing.

Specs

  • Implement to spec. If no spec exists, document. The document is now the spec.
  • Writing code is part of the solution, after the problem is understood.
  • Write to the API before creating it, make it something you would want to use.

Quality

  • Time to do it right. If the time doesn't exist now, when will it?
  • Are you making good software, or just trying to make some software work? Making a quick patch just pushes the time to do it correctly into the future. Fixing will make the problem go away forever.
  • There is always work to do in areas of performance and code clarity/quality. Schedule this as part of the time spent.

Complexity

  • Aggressively attack complexity or it will catch up with you.
  • Don't be too clever for your own good if clever adds complexity. Be clever in simplifying. Complexity will cause brittle code and crusty developers, avoid needless mental work in code.

What math is needed for software development?


We sometimes see articles or comments saying that you don't need math to be a good programmer.
The thing is, software development has its roots in computer science, and CS is applied mathematics. So it is true that you don't need math to do software development; it just makes it a lot easier if you do. How good do you want your work to be?

In the broader spectrum of software development there is so much to do: planning, designing, reviewing, etc. Do you have the design skills to make it look good? Do you have the empathy to know what the customer wanted in the first place? These don't seem related to math at all, but in pulling them all together to create a solution you will benefit from the problem-solving skills that math provides.

What actual math is needed for programming?

Computer Science is Applied Mathematics, so to be a computer scientist you would need a strong mathematical foundation. For making software, you don't have to be a full-blown computer scientist, but first you would have to use and understand the logic and data involved and how these two things work together to create your program.

One cannot do computer science well without being a good programmer, nor can one understand the underpinnings of computer science without a strong background in mathematics. Education must make students fluent speakers of mathematics and programming, and to expose them to both functional and imperative language paradigms so that they can effectively learn computer science.

Early programming courses and discrete mathematics will articulate the strong ties between mathematics and programming. Then the coursework should bridge the gap between the mathematical perspective and the implementation of an algorithm as a sequence of instructions through an imperative language.

I have thought of programming as largely a combination of set theory and predicate logic. Category theory may be a better way of going about the first part, as it is really set theory when combined with functions. I'm not sure if that replaces logic as much as it extends it. I'm really seeing more light in functional approach as the way to glue the concepts together.

How to get there

The first-year programming course should not be viewed as computer science in its entirety. It is a formal language and propositional logic course, which is a foundational aspect of CS, but doesn't represent the entire profession.
    • set theory
    • predicate logic
    • combinatorics
    • probability
    • number theory
    • algorithms
    • algebra
    • graph theory
    • Understand sets and how regular algorithms apply to them
    • Functions: transcendental functions, including trigonometric, logarithmic and exponential. Algebraic vectors. Combinatory logic is the root of lambda calculus
    • Computability and Turing-style computer science is a bit at odds with formalism; it's a different philosophy of the nature of mathematics

    Logic: first-order logic (http://en.wikipedia.org/wiki/First-order_theory) for the theory of computation; second-order logic for computational complexity and NP-completeness. You also need to understand relations: unary, binary, ternary, n-ary. These are important for iterating over sets with algorithms (check out the STL style of applying functions) and for relational data.
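The relation material pays off directly in code. Here is a small Python sketch (the names and data are made up) that treats a binary relation as a set of pairs and derives new relations by applying set operations over it, STL-style:

```python
# A binary relation represented as a set of ordered pairs.
parent = {("alice", "bob"), ("bob", "carol"), ("alice", "dan")}

# A unary relation (a predicate): the set of people who are parents.
parents = {p for (p, _) in parent}

# Compose the relation with itself to derive a new one: "grandparent".
grandparent = {(a, c) for (a, b) in parent for (b2, c) in parent if b == b2}

print(sorted(parents))    # ['alice', 'bob']
print(grandparent)        # the single pair ('alice', 'carol')
```

The comprehensions are the Python analogue of applying an algorithm over a container: the relation stays declarative data, and the logic lives in one expression.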

    Numerical methods for solving simultaneous linear equations, roots of equations, eigenvalues and eigenvectors, numerical differentiation and integration, interpolation, solution of ordinary and partial differential equations, and curve fitting.
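As a taste of the numerical-methods side, here is a minimal sketch of Newton's method for finding a root of an equation:

```python
def newton(f, df, x0, tol=1e-12, max_iter=50):
    """Find a root of f by Newton's method: x <- x - f(x)/f'(x)."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            return x
    return x

# The positive root of x^2 - 2 is sqrt(2).
root = newton(lambda x: x * x - 2, lambda x: 2 * x, x0=1.0)
print(root)
```

A few iterations converge quadratically, which is the kind of result the calculus behind the method guarantees.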


    Wrap it up

    Math is the language of technology, and computer science is applied mathematics. The more you know about these foundations, the better software you can make.




    Communication between web pages and server applications

    Web sites were originally intended as a tool for sharing academic papers, but have evolved into a platform for applications that people use every day. That is a big change, and to enable it software developers have created many tools and discovered some patterns along the way. Some of those tools and patterns have worked well, and some not well at all. Here I present some findings from making web applications over the last 20 years. Lots of learning in that amount of time!

    This is part 2 of a set of posts, so this article assumes you can make a web page with HTML and JavaScript, but have a hard time understanding how that page works with a server or API, and aren't sure about HTTP, SSL, or other terms you have heard along the way.

    What are we making?

    For this example we want something simple, but with some logic, so we can understand how the pieces work. We have a fictitious client named Terry. Terry works at the weather office and needs a web app to track weather data, like temperatures. Terry needs a weekly report for this weather data, and is waving around a pile of money to get it done. Let's get this done for Terry.

    How are we making it? 


    We are going to use Python to accomplish this task. There are many choices when it comes to web servers, and applications are built in various languages (Java, PHP, Ruby, JavaScript), but we are going to use just Python to cut down on the complexity of setting up a lot of tools.

    Servers
    First let's understand how the page in your browser interacts with 'servers'. When you type an address in the browser you are requesting content from that address. What does that mean? It's really the same as opening a file on your own computer, and you can see the difference in the address bar: opening the file locally gives an address starting with file:, while opening the file from a server gives an address starting with http:. What are those indicating? They are the protocols the browser uses to get the content. Files local to your machine are fetched with the 'file' protocol. Files served from web servers are content from another machine, so the browser uses the 'http' protocol to get those. HTTP stands for 'hypertext transfer protocol', and it is exactly that: a protocol for transferring hypertext files around. I suppose 'file' could be renamed "hypertext file protocol", but htfp isn't very descriptive, so file is used.

    What's after the protocol? The name of the domain you are requesting the resource from; this is whatever.com or your favourite site.

    Start a web server on your local machine, and get the file with your browser from that web server instead of your file system. You get to use the http protocol to do this, and you use the domain name of your local machine. This is 'localhost', but as always that name maps to an IP address, and in the localhost case the IP address is 127.0.0.1. All domain names map to IP addresses. This is the internet magic!
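You can verify that localhost mapping yourself with a couple of lines of Python:

```python
import socket

# Ask the resolver what 'localhost' maps to, just like the browser does.
ip = socket.gethostbyname("localhost")
print(ip)  # typically 127.0.0.1
```

The same call resolves any domain name, which is the DNS half of the "internet magic" above.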

    Use apache or nginx or any webserver to do this; here is a handy link to get started with the simple http server in python: https://developer.mozilla.org/en-US/docs/Learn/Common_questions/set_up_a_local_testing_server

    Methods
    The HTTP protocol has a number of methods for interacting with a web server, to get content from it or send content to it. These are requests and responses; here we will cover the two most commonly used requests: GET and POST.
    GET is what it says: I want to get some content. POST is: I want to send some content. If you type an address into the browser it's a GET request, but if you fill out a form on a page and press save, you are probably sending a POST request.

    The vast majority of web and application requests are GET requests, and those just fill your browser with HTML text content. Sending data to the server is a different story, so let's dive into the details of POST and how to use it to make web applications. Use your HTML skills to create a form element, and inside it put an input field and a submit button. When you POST to localhost, you are sending a request that contains all the info about your browser plus the value of the field you just filled in. How do you handle that on the server?

    Flask
    The application server we are going to use here is Flask. There are many different ones for many different languages and this isn't a comparison post, but instead an effort to explain the basics and apply some fundamentals to get you on the road to web development nirvana.

    http://flask.pocoo.org/docs/1.0/quickstart/#http-methods

    Web pattern 1.0


    GET the page and POST your data. The server will get the data from the POST, but what happens when you hit refresh in your browser? The same data will POST again. This can be an issue, since the user only wanted to send the data once. Good practice here is for your boundary object to redirect to a GET request, so the data is saved and you don't have the issue of re-submitting data. This gets us further, but 2 problems now remain.

    1. To show everything on the page you have to fetch all the data, plus any changes made by the POST call
    2. The experience for the user isn't much fun, as the browser is constantly being redirected, causing a 'flicker' effect when dealing with many pages.
    So, wouldn't it be nice to just leave the page in the browser and POST the data some other way? This was the impetus for the technology pattern we know as AJAX. AJAX lets you POST using JavaScript and get the response from the server through the same function. Browsers from the late nineties onward supported this feature.

    Back then you had to get creative. I made a Java applet back in 1998 that was very much an AJAX app. It was only about 60kb and it just had the basic networking to GET and POST XML text to the server. Another applet was the toolbar in a frame, which switched the pages in the main frame. It was a reporting tool for a server-side application. The applet interfaced with the JavaScript in the browser through a JSObject object from Netscape. Getting this Java applet to work in IE on the Mac was quite the trick, but the experience using the page was very slick. About that time the XMLHttpRequest object became available in many browsers, and AJAX became a widely accepted pattern, spawning a new buzzword: web 2.0.

    Patterns learned

    GET to POST.

    • Use GET to initialize a view of HTML, but don't pass data to the server using GET (unless it's a temporal piece of data like a token)

    POST redirect to GET. 
    • When POSTing data, redirect the user's request to a GET request that confirms what they posted and prevents unnecessary resubmits from the form.
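The two patterns above can be sketched with a few lines of Flask (the route and the 'temperature' field are invented for this example; this is a sketch, not a full app):

```python
from flask import Flask, redirect, request, url_for

app = Flask(__name__)
readings = []  # in-memory stand-in for real storage

@app.route("/", methods=["GET"])
def show():
    # GET initializes the view; refreshing here re-runs a harmless GET.
    return "Readings: " + ", ".join(readings)

@app.route("/", methods=["POST"])
def save():
    # Save the posted field, then redirect to the GET view so a
    # browser refresh cannot re-submit the POST.
    readings.append(request.form["temperature"])
    return redirect(url_for("show"))
```

Pressing refresh after a save now re-issues only the GET, which is the whole point of the POST-redirect-GET pattern.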

    Web pattern 2.0


    With the advances in web page development as a graphical platform, and the underlying AJAX technology, web pages started to feel like desktop applications. Plug-ins like java applets and flash were taking hold, but developers were seeing how the html/css/javascript basic web technologies were maturing into a package we could build applications in.
    Around this time XML was the cool new technology. It was a way to interchange data on common http protocols, and easy to use because it was just formatted text. The X in AJAX is for XML, and for a few years we were posting XML and getting XML responses and updating our web page elements with those results.
    This was great, but one more improvement was to come. Since we use JavaScript in the browser, and sometimes on the server, using XML to interchange data became a bit cumbersome. A JavaScript guru named Douglas Crockford came up with a way to represent everything XML can interchange, but using JavaScript's own notation. JSON was born, and it's now the better way to send and receive data from the server.

    Patterns learned


    POST JSON with AJAX, and do so consistently for every form. On the server it's much easier to deal with consistent data structures, and with this pattern the client and server share a common type of data structure. Why didn't the term become AJAJ? That may be more correct, but let's not get caught up in pedantic details and move forward.
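The server side of that JSON exchange is just a round-trip through a serializer; in Python:

```python
import json

# What the browser's AJAX call would send as the request body.
payload = json.dumps({"temperature": 21.5, "city": "Halifax"})
print(payload)  # {"temperature": 21.5, "city": "Halifax"}

# What the server does with it: parse the text back into a structure
# and work with real types (a float here, not a string).
data = json.loads(payload)
print(data["temperature"] + 0.5)  # 22.0
```

Compare that with XML, where you would still have to pull typed values out of element text yourself.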

    Security and Performance
    SSL, sensitive info

    keep the payload light, avoid redirects

    Server patterns are next

    Developing web pages with HTML and CSS

    These are notes I have been using to teach building web applications, https://github.com/jseller/repo

    That is a very large topic that is too much for one post. So, for easier learning this has been split into four parts:

    1. Web page development 
      1. HTML, the fundamentals of HTML, CSS and JavaScript
    2. Understanding communication between the browser and a web server. 
      1. using HTTP and and a server API
    3. Building web applications 
      1. Patterns for building web applications that service requests from the front-end and save data
    4. Data and databases
      1. Structuring data and making scale-able applications

    We are going to build a simple web app that explains some fundamentals so you can see how everything works together. First we are going to build a page, then move to the middle tier of logic and the bottom tier of data persistence. Those later concepts require a much more thorough knowledge of networks, operating systems and databases. We will get there; but let's start with a simple page.


    Front-end

    The web page is the visual 'front-end' of your web application. This is what the user actually sees and interacts with. It's like looking at a house, car or a device: for all the technology happening under the covers, it's the first impression of just looking at it that can make or break its success. So it may be a good idea to think of the web page as the decoration and design of the house interior. The walls have to be built, but it makes the room much nicer if the walls are painted and decorated. This highly visual work demands a view from the design and customer side more than from the machine running it. Graphics and graphic artistry play a big part in successful web development, but success is really about how usable it is. The experience the user has in solving their problem or need is what determines success. This is the X in the terms 'UX' or 'CX' that you may have seen before.

    Get Started

    Here we learn how to develop a web page with just enough HTML, CSS and JS to understand the fundamentals. The tools we will use are a text editor and a web browser.

    • First basic HTML, and understanding how a web page is just a document
    • Then add CSS, and learn how the browser visually formats the HTML
    • Then add Javascript, and learn how we can add interaction to the site and the basic programming behind responding to users actions

    HTML

    The Browser
    You have a browser on your computer, and this is the window you view the world wide web through. It's Chrome, or Firefox, or Safari, or Internet Explorer, or Lynx.
    1. Start it up and go to this site
    2. View source
    3. Save as a web page
    4. Go to the folder where you saved the page
    5. Open the file with your web browser.
    Websites are just HTML files on another computer.

    The Editor
    You have an editor on your computer; it's called Notepad or TextEdit. While these can edit and save documents, they don't have a whole lot of extra features. Document editors like Word have many formatting features for documents. For this exercise we use the Sublime Text editor, but Notepad++ or Atom are useful in the same way: they highlight the HTML, CSS and JS so you can easily see which parts of the page are for the browser and which are the text or pictures that the user can see.
    1. Start your editor
    2. Open the file you saved
      1. Files have types and this lets the computer know what to use. html files are processed by a web browser.
    3. See the mix of html and text, this should pretty much match what you saw in the 'view source' option in your browser
    4. Flip back and forth between your editor and your browser, so you are using both tools to see the same file. 

    Change the web page

    1. Put your name in the paragraph tags
      1. save the file and refresh the browser
    2. Add a heading. There are different headings, h1, h2, and they get smaller as the numbers increase.
    You can see at this point that the 'elements' you are putting around the text don't show up in the browser; they are there to tell the browser how to display the text in-between. These should be familiar if you have used any document editor before. Paragraphs, headings, bold, italic, etc. This is because the first version of html was made to show documents (actually academic papers)

    Add a list of things

    There are ordered lists and un-ordered lists. Add one to your page

    Add an image 

    Images really make a web page come alive.

    Add a link to another page

    This is really the core concept of the 'web' of pages that we have today. You can tell the browser to load any web page on the web by adding a link. When the user clicks the link the page is loaded.

    HTML Guidelines

    Web pages have come a long way. What was originally just a way to format text documents has evolved into the application development user interface it is today. That's a large evolution in a short period of time!

    To keep up, the HTML specification has evolved to add many more standard elements. You could actually use just 2 elements to replicate what the standard document elements do, but you would need to tell the browser the rules to display them. This is what style sheets, or Cascading Style Sheets, are for.
    Which elements are these building blocks? They are 'div' and 'span'. Think of the div as the structural element that contains a specific part of the page, and the span as a way to change the presentation of the elements within a div.
    • div as structure, span as presentation
    Besides the class or style attribute on an element defining its presentation, there is also the 'id' attribute, which uniquely identifies the element within the entire page. This will become very important when we add JavaScript to the page later.
    • use class to style the element and ids to identify the elements. This way you can change the look and feel of the page by just changing the CSS and not the html

    CSS

    Yes, everything is better with a little bit of style, and your web page isn't any different!

    StyleSheets came around when web pages were pushing past the original paradigm of the text document, and the limitations of that in creating visual experiences. Different fonts, colors, sizes were needed to show the elements of the document in a visual way.

    • add a style section to define some rules for the elements in your page

    Notice we are adding the rules to the page itself, and the page is getting larger, but not too unmanageable. Now imagine a much more complicated website with many rules that could apply to each element. That style section would get too large to easily navigate. To solve this you can (and should) put your style definitions in a separate CSS file and refer to it in your base page.

    Also notice that you can add style as an attribute to any html element, and those style rules will apply to just that element. This becomes hard to manage when you want style rules that affect the entire document; changing many of these definitions would be painful as the page gets more complex. It's a great idea to define the rules once, and use the class attribute to declare which style rules the element uses.

    • try not to style specific elements, but instead define rules for them separately. It's much easier to manage when things get large.
      • Caveat for html emails. For nicely formatted html emails you have to have all the elements and style definitions in the same file so the email readers have a chance of showing the content consistently.

    CSS Guidelines

    CSS: use ID for structural selectors and class for presentation selectors
    structural elements by id - how the block elements align themselves in the entire document
    presentation element by class - color, margin, border, alignment within the element

    Name styles by content:
    don't do: .red{color:red;}
    do: .error{color:red;}


    Wrap it up

    Making web pages is fun, and to make complex front-end applications it's really important to understand the fundamentals of markup languages and how to format them for effective application presentation.




    Programming Languages: the more the merrier

    What's your favourite language? All of them!

    This post has some random notes on random programming languages. I'm just keeping some links from when I have used the various languages for different products and projects.

    I find programming languages fascinating; you can instruct the computer to... compute things.


    Programming language design is fundamentally mathematical. It follows rules defined in set theory and logic (predicate and propositional).

    Every powerful language has
    1. PRIMITIVES (the simplest entities)
    2. MEANS OF COMBINATION (to create complex entities)
    3. MEANS OF ABSTRACTION (treat complex entities as if they were primitives) 
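A quick Python illustration of the three properties:

```python
# 1. Primitives: numbers and simple functions are the basic entities.
inc = lambda x: x + 1
double = lambda x: x * 2

# 2. Means of combination: compose simple functions into complex behaviour.
def compose(f, g):
    """Build a new function that applies g, then f."""
    return lambda x: f(g(x))

# 3. Means of abstraction: name the combination and treat it as a primitive.
double_then_inc = compose(inc, double)
print(double_then_inc(5))  # 11
```

The named composition can itself be combined and abstracted again, which is what makes a language "powerful" in this sense.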

    Languages and Environments

    Does it matter what language is used? The only thing that can be guaranteed is that, one day, there will be a better way to tackle a problem. Any problem. A language is a tool to create a solution for a particular problem at a particular time. It may be the best solution at the time, but there will be a better one around soon, so don't get too attached to it.
    There is no "real man's" language either. Some are more difficult to do well, but there can't be any impression that one is a magical language that does everything well. As a consequence of the environment people can find themselves in, they could become fans of a particular language, but they should tread carefully and pay attention, or else they will miss the next change.
     
    It does say something about a programmer when they are willing to learn a new language without being required to by work or school. Programming languages are tools in a toolbox; the more the better. If you only have a hammer in your toolbox, then the only problems you can solve are the ones involving nails.
     
    If programming languages were cars? www.cs.caltech.edu/~mvanier/hacking/rants/cars.html
    Should programmers be language-independent? blog.reindel.com/2007/08/28/should-programmers-be-language-independent
    Language courses and essays: http://www.cs.rice.edu/~taha/teaching/05S/411/ and http://www.newartisans.com/2009/03/hello-haskell-goodbye-lisp.html
    A timeline of programming languages: http://www.levenez.com/lang/history.html
    Which are faster? http://shootout.alioth.debian.org/gp4/benchmark.php?test=all&lang=all

    Most programmers start programming by using a Procedural language. These are bits of logic, and when one piece gets too big it is split up into various functions to make the whole thing more readable.
    C, Pascal, Shell scripting, BASIC, web page functions: these are examples of procedural languages.
    A good course to take when learning programming is Comparative Languages (or some similar title). This sort of course requires that a working, useful program be constructed in each of the various styles of programming. A language is picked to demonstrate each of the iterative, object-oriented, functional and prototypal styles, plus one that mixes styles. This is helpful for distinguishing the styles across different syntax and semantics.
     
    Essential to this is the understanding of 'static' and 'dynamic' typing (Early binding and Late binding of values). When comparing the two approaches, a deeper understanding of Compilers and Interpreters is gained in the process.

    Static or Dynamic typing

    It's really a system of typed 'things' that you are dealing with: http://en.wikipedia.org/wiki/Type_system
    http://okmij.org/ftp/Computation/type-arithmetics.html
     
    Types are associated through early or late binding. Languages will have basic built-in types like numbers and characters.
     
    For statically typed languages the programmer is concerned with flow, understanding the machine, and thinking in assembly when programming. The onus is on the programmer to create code that runs fast on a particular processor. This is a skill that's hard to find.

    For dynamically typed languages the programmer allows the computer to profile and optimize the code.
    Then isn't dynamic typing better? Well, it can be, but due to the run-time nature of the type checking, there is a danger of the problems only creeping up during run-time. These problems would have otherwise been caught at compile time.
    This risk can be alleviated with extensive unit testing.
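A small Python sketch of the point: the type mistake below is only discovered when the code actually runs, which is exactly what a unit test forces to happen early:

```python
def total(values):
    # Nothing stops a caller passing strings here; the mistake only
    # surfaces when this line actually executes at run time.
    result = 0
    for v in values:
        result += v
    return result

# A unit test exercises the code, so the type error shows up
# before a user ever sees it.
assert total([1, 2, 3]) == 6

caught = False
try:
    total(["1", "2"])  # int + str raises at run time
except TypeError:
    caught = True
print(caught)  # True
```

In a statically typed language the bad call would be rejected at compile time; in a dynamic one, the tests are your compile step.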
     
    Also, to make the system more 'dynamic' most statically typed languages allow type casting which occurs at run-time anyway. You'll get the same problem.
    Interfaces, and the separation of implementation concerns from them, are a cornerstone of industrial OO development. Dynamic languages are difficult to use in a large team or shared codebase because of their lack of interfaces.
     
    This may only be a matter of communication and documentation. A poorly written statically-typed interface will be a problem no matter what.
    So, it is safer to use a statically typed language, but all the features that have been added to make these type systems more dynamic spoil this through the resulting runtime type checking. There has to be room for both: static interfaces that are safe to use, and dynamic typing when needed.
    The inevitability of dynamic languages? www.dotnetrocks.com/default.aspx?showNum=277

    Dynamic Typing

    Lisp/Scheme

    to add one and two:
    (+ 1 2)

    Lisp, at first, may seem like more of a collection of syntax than a language. It's really a great language and environment when you get into it. You can also learn some great strategies that carry over to other environments. It has pedantic regularity (with everything being an s-expression)

    It's pretty amazing that a language invented in the late 50s has continued to grow more productive and popular. It is still lacking many of the tools for the style of web applications needed today, but that's only a matter of time. Though time may not be on Lisp's side when considering the momentum behind Python and Ruby. Again, it is still around, so we'll see.
    http://www.paulgraham.com
    http://www.norvig.com/
    Scheme is a dialect of Lisp, with the intention of being correct and a good base for teaching.

    An Introduction to Computer Science Using Scheme: http://www.gustavus.edu/+max/concrete-abstractions.html

    StandardML
    OCaml: http://www.ocaml-tutorial.org/ http://caml.inria.fr/
    F# http://research.microsoft.com/fsharp/fsharp.aspx http://blogs.msdn.com/chrsmith/archive/2007/11/10/Project-Euler-in-F_2300_-_2D00_-Problem-5.aspx
    Haskell: http://en.wikipedia.org/wiki/Haskell_(programming_language) http://cgi.cse.unsw.edu.au/~dons/blog/2006/12/16#programming-haskell-intro
    XSL is a language for transforming and formatting XML documents.

    Static typing and Curly braces

    C, C++, Java, D and others are all Algol derived languages

    D

    Walter Bright has done a huge amount of significant work over the years; writing a compiler is a serious piece of work. http://www.walterbright.com/

    http://boscoh.com/programming/some-reflection-on-programming-in-d-and-why-it-kicks-serious-ass-over-c-leaving-it-died-and-tired-on-the-sidewalk#cpreview
    http://www.dsource.org/

    C++

    C++ is arguably the most mature production-quality language in use today. It's my bread and butter.
     
    From a C++ implementation view, understanding how a nice memory-managed smart pointer works will help with issues in tying the various components together. If things get leaky, it gets bad in a hurry. You could have an engine like Ogre, some components from the operating system (rendering libs like DirectX and OpenGL), and then ties to other stuff; using Boost and the STL will help. Once everything seems great, get a good profiling tool.
     
    STL - this is such a standard library that it has to be considered part of the language itself. Use of the algorithm approach on generic containers solves many problems. STL collections, iterators, STLPort: http://www.cprogramming.com/tutorial/stl/stlmap.html The C++ Standard Template Library (STL) is a generic programming paradigm adapted to the C++ language, and an extensible framework for generic and interoperable components.
     
    c++ http://nuwen.net/14882.html 
    BOOST - you will need a good reference-counting smart pointer. The shared_ptr and auto_ptr will come to your rescue. Use the Boost unit tester.
     
    http://www.boostcookbook.com/Recipe:/1235053 http://www.boost.org/libs/test/doc/components/utf/index.html http://sourcemaking.com/refactoring/split-temporary-variable C/C++, C# cheat sheets www.scottklarr.com/topic/121/c-cpp-c-cheat-sheets Linking C++ objects using references the-lazy-programmer.com/blog/?p=12
     
    In C++, make everything that can be const a const, and inline small setters/getters.

    Scott Meyers : Effective C++. Just read it and do what he says.

    Modern C++ Design: Generic Programming and Design Patterns Applied by Andrei Alexandrescu

    Herb Sutter
    http://www.cppreference.com/

    www.cprogramming.com/
    www.cuj.com

    C#

    C# was created to bring C++ style to the Microsoft (ECMA standardized) Common Language Runtime. The language was created by Anders Hejlsberg, who made Turbo Pascal; my first programming environment.
    It has great libraries and MS support, and many features that allow it to act like a functional language.
    C# 3.0 tutorial: www.programmersheaven.com/2/CSharp3-1

    C++/CLI

    C++/CLI is the implementation of standard C++ for the Microsoft Common Language Runtime. It's the 'managed' version of C++, so it has support for the memory management that is lacking in traditional 'unmanaged' C++.
    Here is a good intro: http://www.codeproject.com/managedcpp/cppcliintro01.asp

    Java


    Java is a programming language originally developed by Sun Microsystems and released in 1995 as a core component of Sun Microsystems' Java platform. The language derives much of its syntax from C and C++ but has a simpler object model and fewer low-level facilities. Java applications are typically compiled to bytecode that can run on any Java virtual machine (JVM) regardless of computer architecture.

    Java was created with many features to try to make high-level programming easier. It's a safer, more mellow, C++. While C++ has direct access to the hardware, Java is interpreted in a virtual machine.
    It's really taken its place as the 'enterprise' language, sort of the current standard for most programming. This is reflected in its use as the most common teaching language in North American universities. As long as the programs don't teach just one language, this is fine. However long Java lasts in this place, it has brought many people to programming who would have stayed away otherwise.

    When Java came out in 1995 it came with a lot of hype about its safer style of handling memory allocation and clean-up through the garbage collection process. It is an interpreted language: instead of being compiled for a specific platform, the code is compiled into an intermediate bytecode, which is then run on a Virtual Machine. The VM is written for specific platforms. This enables the bytecode to run, unchanged, on many different platforms.

    The extra step of the virtual machine running the bytecode made Java much slower than C++ at first, but since then the speed of the hardware and optimizations to the bytecode and the virtual machine have closed the gap considerably. Java is, 10 years on, an industrial-strength language now.

    Java has a cool feature called Reflection, where you can analyse objects at run time. C# has this same feature.
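Python has the same capability; a minimal sketch (the Sensor class is invented for illustration):

```python
class Sensor:
    """A stand-in object to inspect at run time."""
    def __init__(self):
        self.reading = 21.5

    def describe(self):
        return f"reading={self.reading}"

obj = Sensor()

# Reflection: examine and use the object by name, at run time.
method = getattr(obj, "describe")   # fetch a method by its name
print(method())                     # reading=21.5
print(hasattr(obj, "reading"))      # True
print(type(obj).__name__)           # Sensor
```

Frameworks use exactly this trick to wire objects together from configuration rather than hard-coded calls.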

    I use the NetBeans IDE, which Sun has since bought and now presents as an alternative to the Eclipse project. It's familiar, so it's fine for me.
    The Spring framework uses inversion of control to enable regular objects to be used without a lot of extra work required.

    Scala

    Scala is a functional, descriptive language that runs on the Java Virtual Machine. It has many features around concurrency.
    http://www.scala-lang.org/
    http://langexplr.blogspot.com/2007/07/structural-types-in-scala-260-rc1.html

    Groovy

    Groovy extends the JDK libraries and makes a great tool for all the engineering tasks that Java is too clunky for. It has great dynamic features.
    // a program to convert a number into a text representation of its digits
    // 356 should say 'threefivesix'
    def num = '356'
    def words = ['zero','one','two','three','four','five','six','seven','eight','nine']
    num.each(){ println words.get(it.toInteger()) }

    Clojure

    Clojure is a Lisp dialect http://clojure.org/rationale

    Pascal

    This language introduced me to the world of programming.

    Mocha/JavaScript/ECMAScript

    Fascinating story about how this language came about.
    Use a good library to abstract yourself from inconsistent implementations in browsers. 

    this
    Use of the keyword 'this' can cause issues in anonymous functions. You can assign the value of 'this' to another private member, but you can also use the built-in methods 'call' and 'apply' to preserve the scope of the contents of 'this'.

    http://www.crockford.com/javascript/private.html
    http://jibbering.com/faq/faq_notes/closures.html
    Interesting views on the world's most misunderstood programming language, and jslint, from Douglas Crockford. Javascript and JSON. Javascriptkit for a comprehensive language reference.

    JS in the browser is the heart of the browser platform. It allows the script to manipulate any part of the document currently loaded. An easy example is to write out some text and have a function that changes it.

    Written in a short time by one person, it has become the most used language in history. It's much maligned, and has some issues, but it's actually a wonder how well it has worked, for so long.
    To encourage some standardization, the vendor-neutral ECMA spec emerged.


    Continuously integrate software and everything around it


    Continually integrating code increases velocity and quality

    While you are working away on a project or a paper, it's easy to handle all the modifications on your own, since you are the only author. Things get interesting when many authors need to use the same source.

    Integrate early and often.

    Especially when you have the very first thing working, and you have a unit test and a functional test doing the bare minimum, take a moment to integrate with the larger project, commit changes to the branch, and generally harden the edges around what is newly built.
    Once the resulting leaner version of the feature is in the build, expanding on that feature is done more effectively, as you have the build cycle moving with everything intact.

    Automate the building of the software. Once the scripts are done (and they are way easier than everyone seems to think), the project builds itself every night. Why not do this? Everyone is home sleeping, and the computers at the office are just running idle all night. Use that time.
     
    The build script, like all scripts, starts small and grows from there. The build script should do the basics:
    • Clean the source directory.
    • Sync code into staging area for building
    • Build all source in the staging area. Full build too; no incremental or half-build shortcuts. Build dependencies and any inclusions.
    • Build target libs and place them into target directories.
    • Build binaries and link all necessary libraries.
    • Clean all intermediary files
    • Package binaries and libs for distribution
    • Configure target machine for deployment
    • Check network configuration
    • Deploy binaries and libs
    At each step some reporting can be done and the results archived. The script runs at night when everyone is asleep. The results are ready the next day, and every day.
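    The steps above can be sketched as a simple pipeline; the step names follow the list, but the bodies are placeholders for your real commands:

    ```javascript
    // Each step is a name plus a function; the bodies here are placeholders.
    const steps = [
      ['clean',   function () { /* remove the staging directory */ }],
      ['sync',    function () { /* pull source into the staging area */ }],
      ['build',   function () { /* full build: dependencies, libs, binaries */ }],
      ['package', function () { /* package binaries and libs for distribution */ }],
      ['deploy',  function () { /* deploy to the configured target machine */ }],
    ];

    function runPipeline(steps) {
      const completed = [];
      for (const [name, run] of steps) {
        run();                // any exception here marks the whole build as broken
        completed.push(name); // each step's result can be reported and archived
      }
      return completed;
    }
    ```

    Run it nightly from a scheduler (cron, or the CI tool itself); if any step throws, the build is broken and the report shows how far it got.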
    Polish your build!
     
    CONTINUOUS INTEGRATION
    • Integration is the assembly of all the parts to end up with a deliverable 
    • building, testing, packaging, deploying.. 
    • One person doesn’t need an integration effort, but more than one person does 
    INTEGRATION
    • Artifacts are created: source code, and other dependencies 
    • Source from different places has to end up in one place 
    • There are many options, but using a source control tool is the best practice. 
    SOURCE
    • A common, neutral build is necessary. 
    • ‘Works on my machine’ works if you are distributing your machine 
    • A common build machine/environment eliminates local dependencies and gives confidence it can work elsewhere 
    • When to build all this? Weekly? Daily? 
    BUILDING FROM SOURCE
    • Once the source compiles, how do you know it works at all? 
    • Manually running the build and exploratory testing can take you so far 
    • Having unit tests can give more confidence that the code that is tested will work 
    TESTING THE BUILD
    • You can just zip up the compiled source, email it somewhere and run it… 
    • Error prone, it’s easy to forget menial steps 
    • Time consuming as new dependencies are usually found as a result of things ‘not working’ 
    • Run those tests to give confidence that all tests will pass. Tests that aren't in the dependency tree shouldn't be affected. Make unit, integration and functional tests run separately or all at once
    PACKAGING FOR DEPLOYMENT
    • Building, testing, packaging, versioning and deploying define a process 
    • If it is a defined process, it can be repeated. 
    • It’s ‘defined’ if you did it once. Never wait for a master plan to start, just evolve the plan as you go. 
    • The build is ‘broken’ if the entire process doesn’t complete. 
    IT’S A PROCESS
    • Once the process is automated, run the process as much as you can, not ‘when you need to’.
    • Always look to automate any repeatable process. 
    • Run the process continuously and be aggressive about finding more tasks to automate 
    • Why do all that work when a computer can do it for you? Computers just sit around all night and they don’t get tired. 
    AUTOMATE EARLY AND OFTEN
    • Spend your time actually building the technology, not doing the housekeeping around it.
    • Automate the source and change control, the build and packaging, the testing and verification, and the deployment 
    • Only one trigger is needed: when the source changes. 
    • Run on every change, every hour, all night… all the time. Who cares? It’s automated. 
    CONTINUOUSLY INTEGRATE EVERYTHING
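    A sketch of that single trigger; getRevision and runBuild are assumed hooks (e.g. wrapping 'git rev-parse HEAD' and the build script):

    ```javascript
    // Polls source control and runs the build only when the revision changes.
    function makeTrigger(getRevision, runBuild) {
      let last = null;
      return function poll() {
        const rev = getRevision();
        if (rev !== last) {   // only one trigger is needed: the source changed
          last = rev;
          runBuild(rev);
        }
      };
    }
    ```

    Schedule the poll as often as you like; an unchanged revision costs nothing, so there is no reason not to run it continuously.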
    • Continuous integration tool: ’Hudson’ 
    • Source Control: git 
    • Building: using compilers specific to the code base 
    • Metrics: running analysis tools on codebase 
    • Testing: running unit tests on every build 
    • Metrics: running more analysis tools on built codebase 
    • Packaging: auto-deploying to the development environment 
    PRODUCT
    • Allows builds at any time, with all versions aligned

    Wrap it up


    Continuous integration is a great strategy to automate a lot of quality into your application; over time you will save a lot of time and increase the general quality of the technology you are making.

    It's a process; do a little each week and the results will build up over time.