Technology Cost Savings Versus Technology Investment

Episode 66

Whether it is finding and retaining IT resources, paying for cloud services, or purchasing and maintaining on-premises infrastructure, let’s not kid ourselves: technology is expensive. Software critical to the good functioning of organizations is largely delivered as SaaS products licensed per user/month; $3 here and $12 there, and an administrator has little to worry about, until every employee in the organization carries 10+ licenses across various SaaS products. With technology costs eating up more of the budget, eventually the leadership team will decide that it is time to find ways to cut costs. I want to use this blog to discuss the nuances of ensuring your organization gets the most value from its IT spending, and of avoiding the cost-cutting spiral.

The worldview is dramatically different when organizational spending is classified as a cost versus an investment. The classic school of thought is to minimize costs by negotiating better terms with suppliers, substituting inputs, or pursuing more efficient uses of input resources. Input resources are acquired, then value is added to them through your organization’s core competencies; the mindset with costs, however, is that the exchange is zero-sum. The cheaper we can source inputs, the greater the return for us.

A spoon with coins in the bowl and a potato on the other end is balanced on a calculator, all sitting atop printed financial papers

But ask yourself: is IT spending at your organization a cost or an investment? When we invest, we are performing the same action, but our expectations can be dramatically different. There is a generative mindset, where investment represents enablement, fostering growth beyond a simple value exchange. Applying a cost-cutting mindset to an investment tends to starve the initiative of the resources it needs to generate a return. There is a complicated system at play here: if an investment lacks the required resources, it struggles or fails, and there is then an even greater incentive to cut the losers and redirect resources. But if technology forms a critical backbone of your organization’s function, one must be very careful of the second-order impacts when “investment” is cut. Has anyone experienced IT staff cuts or limited training? What usually follows is a major transformation push, only to see the initiative abandoned. Managing technology assets is not easy, but hopefully I can provide some warning signs to be aware of and a better guiding worldview.

Individuals, or a collection of individuals within an organization, cannot be experts in everything. Adoption of any new platform takes time and resources, scaling exponentially with complexity and business importance. Organizations are a mix of people, processes, and technology. Apply the cost-cutting mentality to any of these pillars (insufficient staff training, little change management, no process redesign) and it should be obvious that the investment in technology is not going to return value.

Focus on return on investment over total cost. Treat IT spending as one would an investment in a company, a critical piece of equipment, or real estate. Investments are viewed as generative. For the last six months I have been neck deep in Power BI, a business intelligence platform. It is free to use, but Pro licenses are required for users to view reports shared on the Power BI service. At $12 per user/month for every user in the organization, it can be a tough sell initially. I look at that $12 not as a cost but as a way to generate time savings. If an application like Power BI cannot save every employee 15-30 minutes per month, or enable a better decision that returns equivalent value, then it is not being used to its full potential. Repeat the same thought experiment with every other system in your organization.
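
To put rough numbers on that thought experiment, here is a back-of-napkin sketch; the $50/hour loaded labour cost is purely an illustrative assumption.

```typescript
// Break-even check for a per-user license, e.g. the $12 Power BI Pro example.
const licenseCostPerUserMonth = 12; // dollars per user/month, from the example above
const loadedLabourCostPerHour = 50; // dollars per hour -- an illustrative assumption

// Minutes of monthly time savings needed for the license to pay for itself.
const breakEvenMinutes = (licenseCostPerUserMonth / loadedLabourCostPerHour) * 60;

console.log(`Break-even: ${breakEvenMinutes.toFixed(1)} minutes saved per user/month`);
// => Break-even: 14.4 minutes saved per user/month
```

Under these assumptions, the 15-30 minutes of monthly savings mentioned above comfortably clears the break-even point.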

It is counterintuitive, but the better your organization can use its technology assets, the less it will spend on them in the long term: fewer big-bang replacements, fewer critical failures, less downtime, and fewer engagements with outside consultants. Growth and improvement come from an investment mindset, not cost-cutting.

Innovation Agility

Episode 65

Technology-driven change is an inescapable force in our modern lives. In a previous blog, “Innovation velocity,” I discussed how quickly new organizational knowledge can be derived from ideas validated outside the organization. For those still familiar with their high school physics, velocity is a vector that contains both a magnitude and a direction. The Innovation velocity blog covered the magnitude of change, whereas this blog will investigate direction.

I love auto-racing. In my mind, the ultimate racing machine is the winner of a 24-hour endurance race. It is not the vehicle with the most power, best handling, best driver(s), best pit crew, or greatest reliability, but the best combination of all these traits suited for the track and conditions during race day. The “winning” business model is not the one that can produce the greatest magnitude of innovation, but the organization that is highly capable of efficiently changing its direction of focus.

For instance, an organization suffers from poor innovation agility when it gets stuck on a sunk-cost project or idea. Another example is the calcification of business processes, where changes are gated through unnecessary approvals, reviews, and meetings. I am not advocating for chaos; structure is important. So how do we benchmark an organization’s innovation agility? How long does a change of a particular magnitude take? Does a particular change, when implemented, create actual or perceived chaos? Does your organization miscategorize changes, labelling as “easy” what is actually moderate or difficult to implement? Ever get the feeling, “why does this need to be so difficult?”

LMP1 Le Mans racing cars sit in the paddock (metaphor for innovation agility)

Ideally, an organization capable of demonstrating effective innovation agility should be able to navigate most complex changes without generating organizational chaos, much like keeping the car on the track while driving at the limit of the vehicle’s performance. What are the elements within an organization that influence how well it can stay on track? The strongest determinants of innovation agility are team autonomy, trust, and cohesion. Trust and cohesion are linked between individuals, teams, and the organization. Does management trust employees? Does your organization identify as a meritocracy? Are there intra-team dynamics where employees seek praise and try to deflect blame? Do employees trust the organization enough to take risks or freely speak their opinions? The greater the degree of trust and cohesion, the better teams can operate with autonomy, which by extension allows the organization to innovate with greater agility.

The idea of antifragility is one concept that can help with your organization’s innovation agility. An idea presented in Nassim Nicholas Taleb’s book Antifragile is that some things benefit from shocks, thriving and growing when exposed to volatility, randomness, disorder, risk, and uncertainty. A concept explored in a previous blog was the idea of using disaster days to enhance organizational resiliency. If the pandemic has shown us anything, it is that within every organization is an untapped capacity to innovate when faced with adversity. Instead of waiting for the next shock, use structures like disaster days and the concept of antifragility to develop internal competencies for handling change. Now race your organization to a podium finish!

Microsoft Certified Data Analyst DA-100 Exam

Episode 64

A Data Analyst enables a business to maximize the value of its data assets. Data Analysts are responsible for designing and building data models, and for cleaning and transforming data, enabling the organization to draw meaningful business value and insights from it. To earn the Microsoft Certified Data Analyst certification, one must score 700 or higher on the DA-100 exam. The Power BI platform, with its native integration into the rest of the Microsoft ecosystem, is without question the clear leader in the business intelligence solution space. I have always been a data nerd and am grateful to MERAK for supporting me in becoming an expert in Power BI and acquiring the certification. This blog will cover my motivations for the certification, my experience taking the test remotely, and the resources I used to prepare for the exam.

Early in my career, coming from a STEM background, Excel was the analytical multi-tool capable of solving many problems. Demonstrable Excel skills were considered valuable and a potential differentiator when I had little relevant work experience. I used Excel to build financial models, facilitate project and operational planning, perform cost analysis, and capture data. Until I discovered Power BI, it was the go-to application whenever a numerical or organizational problem expanded past the napkin.

Screenshot of a complicated Excel spreadsheet with relationships mapped out with lines.

Until I discovered Power Query while solving a problem with parsing JSON files, I struggled with the internal narrative that “I was not a coder”. Using the Power Query editor broke down that barrier and allowed me to develop my skills beyond Excel formulas. Prior to joining MERAK, I managed a complex set of spreadsheets providing ERP and CRM functionality for a video game (EVE Online) that I have maintained for a little over a decade now. At the time, I felt confident in my skills, only to have them shattered after attending a Microsoft Ignite Power Query talk. It was a humbling experience to be shown how much I did not know in 45 minutes. Combined with the kind mockery from my MERAK colleagues pointing out that using Excel as a database is not a great idea, I set off on a major rebuild and migration into SQL starting a little over two years ago. The transition would not have been possible without support on the initial setup and occasional troubleshooting.

There are a couple of challenges that come with self-learning. The first is that when you solve a problem for the first time, your solution is likely not the right or best way, so it is very important to revisit old work and correct it when you know better. This can be very time consuming. Another challenge is that I had no idea when I crossed the lines between beginner, competent, knowledgeable, and expert (I feel mastery is not possible for a platform that is so large and changes so rapidly). Building business intelligence solutions typically involves working with confidential data, which makes communicating competency challenging, and not everyone will take seriously a custom application built to facilitate activities in a video game. The certification therefore both benchmarked my knowledge and better communicated my competency externally.

The certification process is going to be different for everyone based on how much seat time they have had with the application. If you have been regularly using the application for a year, and have gone from source data to published reports/dashboards/apps, passing the exam should not be too challenging. The content depth is roughly the equivalent of an undergraduate university course.

It is my opinion that with just course preparation material, it would be difficult to earn the certification, as there is a fair amount of nuance that needs to be understood when creating a data model. Likewise, first-hand experience sharing and administering reports is very useful, and hard to come by if learning on your own, even though conceptually the topics are not difficult. If you learn by doing, like me, internalizing the knowledge without a project to test it on would be challenging.

I already had a complex data ecosystem to work with, but if there is not a clear problem in front of you, my advice would be to find a public data source and start building for an audience. Public COVID, economic, weather, or financial data are some sources from which rich models could be developed. It is important to be comfortable with M code and DAX, but you do not need to be a deep expert. If you can make modifications to M code without the UI and read its overall structure, and can aggregate with filters and manipulate dates using DAX, you should have enough background knowledge.

I want to end this blog with my experience writing the exam remotely through Pearson VUE. I feel very fortunate that the remote exam option is available, probably one of the bright spots that came out of COVID. The nearest exam location would have meant an out-of-town trip, likely doubling the time spent away to write the exam. They have a very strict protocol to limit the possibility of cheating: video and audio are recorded, only a single monitor is permitted, and no headsets are allowed. In my home office I primarily use a wireless headset, so I had to use my recording microphone and physically disconnect the extra monitors. Also, no talking to yourself during the exam. You photograph your workspace beforehand, and then verify over video that you have made the appropriate changes. As I had all the cables on my workstation wrapped up, I needed to untie most of them to give my webcam enough freedom to scan the entire room. Save yourself the trouble, if you can, and use a laptop in an empty room. Aside from the initial setup, the experience was awesome: instant results and no long drive back home. I will definitely do future exams remotely.

Useful DA-100 Resources

The interplay between business strategy and technology

Episode 63

I’ve said many times in this blog series that every modern business is a technology company. But what does that really mean? How can every modern business be a technology company? Why is technology so important today? Who or what is driving your client or customer experience? How is data informing decision making? What is the glue binding people to the processes that differentiate your organization? How is technology shaping your organization’s strategic planning process? This blog will explore these questions.

What is business strategy? It is the plans, actions, and goals that outline how a business will compete in a particular market with a product or service. There are classic frameworks that help facilitate strategic planning. The most foundational are the corporate statements: Vision, Mission, and Purpose. It is common for these terms to be used interchangeably, but in short they answer: why does your organization exist, and what does it do?

Another layer in the strategic planning process is the examination of both internal and external environments. Two common frameworks are SWOT (strengths, weaknesses, opportunities, and threats) and PESTEL (political, economic, social, technological, environmental, and legal). These frameworks help decision makers contextualize their organization, and that context can then be communicated to the rest of the organization, typically through strategic planning documents or dashboard/canvas-style grids. The art of business is the ability to make decisions in the face of ambiguity. Strong business acumen is the ability to make wise decisions (in retrospect) despite the uncertainties involved.

Lastly, there are core competencies; these are the differentiating aspects of the organization that set it apart from the competition. Core competencies are the elements that strategic plans identify for investment and development. Remember, market competition selects winners that master differentiation. It is an organization’s business strategy that lays out the framework to build the core competencies that will ultimately differentiate it from its competitors.

Corporate meeting room with a large table and ten chairs with a frosted glass wall

It is important to differentiate between strategy and tactics. For most folks, the day to day is focused on the execution of business tactics, the short-term plans informed by the business strategy. Tactics are bound procedures, in that whatever is executed has a clear chain of reasoning (ideally) back to the strategic plan’s goals. For many organizations, IT decisions are bound by strategic plans, but rarely do the constraints or opportunities provided by technology inform the strategic planning process. For instance, a rapidly growing organization decides to expand into a new region. To meet targets, sales managers adopt a more robust CRM solution to streamline sales and marketing operations. Sounds perfectly rational; however, that CRM solution will likely be chosen under the constraints and mind-space of the moment, and had it been considered at the strategic level, the decision could have been different.

What is technology? For this blog, technology is any mechanism created by humans used to enhance the capacity of an individual or group to do useful work. While digital technologies have dominated in the last couple decades, it is important not to forget that digital technologies need to interface with the physical world. There are lost opportunities if innovation effort is purely focused on the digital domain.

Technology and strategy interplay. Leaders must decide whether technology is an external environment consideration or a foundational building block of their organization. A business is a mix of people, processes, and technology, and there is tension and interdependence between these three pieces. Any technology investment will have an impact on people and processes: people will need to learn new skills, and processes will need to be adapted to any new technology implementation. Some questions to assess whether technology is a strategic or a tactical element for your organization: Should investment in technology be driven by market need? Should technology investment be given room for open, unstructured exploration and experimentation? Should priorities target incremental improvements or organizational transformation? Does the organization have the internal capacity to foster entrepreneurs who drive change? Does the leadership team have gaps in its technical awareness? If the answers to these questions point to a high technology need, then it is critical to have both the capacity to provide technology leadership and to ensure its consideration at the strategic level.

Traditional strategic planning doctrines have classed technology as an external environment consideration in the strategic planning process, but increasingly, technology is being seen as the critical driver of competitive differentiation between firms. Differentiation comes from internally generated competencies; therefore, the strategic planning process for modern businesses must factor in technology as a foundational piece.

Cost of Building a Web Application

Episode 62

One of the most common questions we get asked is, “How much will this cost?” The answer is a disappointment to most as there is an expectation that a product or service will have a clearly defined price. Unfortunately, with custom development it is extremely difficult to get an accurate number. With enough experience, a vendor can provide a cost estimate from intuition; however, the estimate will come with a long list of assumptions. The purpose of this blog is to explain why determining the cost of building a web application is so difficult and to provide a rough guide for you to estimate the cost of your idea.

There are four major factors that will influence the cost of an application. They are the type of application, development resources required, payment terms, and the allocation toward maintenance, security, and scalability.

Native and web are the two most common types of applications (we are going to ignore hybrid apps). A native application runs directly on the device’s operating system, which means it has direct access to all device features like the camera, microphone, thumbprint reader, etc. Web applications work through the browser, which has access to some features of the device, but the application remains external to the operating system. Native application development is more expensive but can offer better performance and feature depth than equivalent web applications. For more details, check out this blog.

There are numerous job titles related to development resources, with job functions often interchanged, shared, or blended. A typical core development team will consist of:

  • Developers: make technology work, break stuff, and continuously Google for solutions,
  • Quality Assurance: verify the application works in the way it was intended to,
  • Business Analysts: translate business and technical needs,
  • UI/UX Designers: translate user desires into a pleasing design with an intuitive, easy-to-use experience; and
  • Project Managers: coordinate activities and ensure the team and resources are being used effectively.
An overwhelmed woman holds her forehead and gestures to wait

This list is not comprehensive, but it should be obvious that the more specialized the skillset (architect, security specialist, platform specialist, software language specialist, etc.) required to meet your application’s objectives, the more difficult and expensive it will be to source that talent. Likewise, if your project is being developed by a single individual, it will carry risks associated with that individual’s gaps in competency. For more details on team size and competency, see this blog.

There are three ways to source development resources: in-house, individual contractors, and outsourcing. At a certain scale, an organization’s IT requirements become large enough that it can be advantageous to have development staff as employees. While full-time employees can be less expensive than outside consultants, the fixed nature of their employment means that unless the individuals have opportunities to grow, the development knowledge base of the organization can stagnate, especially if the organization views IT as overhead.

With individual contractors you can target your desired skills and cost, with a resourcing flexibility that is hard to achieve with full-time staff. This approach is growing in popularity for its flexibility, but specialists command high compensation and are hard to find; likewise, assembling a team of contractors can be problematic, as the newly formed team has not worked together before. Most commonly, contractors are used to fill gaps in existing teams rather than replace them completely.

Lastly, you can outsource. In this case you are likely paying more upfront as a firm will have more overhead compared to individual contractors, but you are gaining access to specialized resources who have likely worked together and on a diverse set of projects. Cost and competency can vary dramatically between regions, individuals, and organizations. Specialization and the overhead required to assemble a consistent team and development process will differentiate between low and high-cost service providers. While you may be able to save money with cheaper resources, you may introduce project risks that could lead to additional expenses later. It is not uncommon for two firms to be hired: the first to build the application, the second to fix or repair the application.

When paying for development services there are two methods: fixed price, and time and materials. Fixed price is desirable for the purchaser; however, it places the vendor under a lot of risk, and to compensate, the vendor will adjust their offer price. Therefore, even though your organization might enjoy a fixed price, the risk profile of the project will be priced in. An alternative is to use a fixed price but allow the project scope to be flexible. Time and materials is the other side of the coin: while desirable for the vendor, it places risk on the purchaser if the project runs into trouble. For best results, pair time and materials with agile methodologies, ensuring that working code is delivered as soon as possible and that value is delivered on every sprint. For more details on these dynamics, see this blog.

The last major cost factor is the set of considerations made before the first line of code is written; these are often afterthoughts. It is good practice to budget 20% of the initial development cost for ongoing maintenance. While it is not uncommon for an application to remain untouched for years, with attention paid only when something breaks, the nature of technical debt shows that the cost to fix errors compounds over time like interest on financial debt, with dividends in the form of future savings if one is proactive. Maintenance is tightly coupled with security. Security vulnerabilities have a near-binary cost structure: ignoring security considerations during development or maintenance can have catastrophic consequences. Lastly, it is important to ask, “what if our application is wildly successful, can it scale?” Spending a bit of extra time and resources upfront to ensure the architecture is ready to scale can have a large impact on the application’s cost. It is very important to understand that a software application’s development cost is only one piece of the cost equation.
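
To see why development cost is only one piece, here is a small sketch; treating the 20% guideline as an annual allocation and using a five-year horizon are my assumptions for illustration.

```typescript
// Rough total cost of ownership, assuming the 20% maintenance guideline
// above is applied annually -- an illustrative assumption, not a rule.
function totalCostOfOwnership(
  initialBuild: number,
  years: number,
  annualMaintenanceRate = 0.2
): number {
  return initialBuild + initialBuild * annualMaintenanceRate * years;
}

// A $200k build carried for five years:
console.log(totalCostOfOwnership(200_000, 5)); // => 400000
```

Even this simple model doubles the bill over five years, before any security incidents or scaling work are counted.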

Calculator on desk

At this stage, I hope to have painted a picture highlighting the numerous variables that make it extremely difficult to put a number on the cost of a web application. The cost factors discussed are high level; it would be far too difficult to get to the depth of comparing, say, .NET versus Node.js on development cost. But let’s try to put some numbers to paper.

Simple

  • 3 to 5 features
  • 1 to 2 user types
  • The entire application can be explained in detail on a single sheet of paper
  • We want to build an app that notifies users of a particular event and provides guidance
  • Real world example: An app to notify users of municipal waste pick up days
  • $20k to $100k

Moderate

  • 6 to 15 features
  • 3 to 4 user types
  • Multiple simple interactions in one application, or an application with a singular purpose and moderately complex functionality
  • We want to build an app that notifies users of multiple event types, provides two-way information flow between multiple users, and allows users to post or request information
  • Real world example: An app that allows factory floor operators to coordinate with logistics personnel to request new materials and remove finished goods with a summary for management.
  • $100k to $500k

Complex

  • 15+ features
  • 5+ user types
  • We want to build an application to process insurance claims
  • Real world example: Any off-the-shelf customer relationship or enterprise resource management application
  • $500k+
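
As a quick sanity check, the tiers above can be encoded in a few lines; the boundaries follow the bullet lists and are indicative only, not a pricing tool.

```typescript
// Maps rough scope to the indicative tiers listed above.
type Tier = { name: "Simple" | "Moderate" | "Complex"; range: string };

function roughTier(features: number, userTypes: number): Tier {
  if (features > 15 || userTypes >= 5) return { name: "Complex", range: "$500k+" };
  if (features >= 6 || userTypes >= 3) return { name: "Moderate", range: "$100k to $500k" };
  return { name: "Simple", range: "$20k to $100k" };
}

console.log(roughTier(4, 2)); // => { name: "Simple", range: "$20k to $100k" }
```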

There are two ideas I want to close with. The first: why the $20k minimum? This is a safe(ish) estimate of the minimum cost at which one is not gambling on low-cost freelancers or unproven development talent. On the simple end of the spectrum, you primarily get what you pay for; it would be difficult to find a firm that staffs all the required resources and is able to make an app for less. This does not hold true as complexity grows, as uncertainty becomes a more powerful driver of final cost even when development talent is allowed to vary.

The second: why is there such a massive range within each category? Custom development projects have a high degree of diversity; if a repeatable solution were possible, there would already be a SaaS product on the market for that problem. Initial assumptions can be proven incorrect after development starts, and the larger the project, the more costly it is to change direction. With greater application complexity comes a higher chance of errors, greater upfront design cost, a higher likelihood of requiring specialist skills, and a larger compounding rate on technical debt while errors remain undiscovered. Most development cost is spent on the exceptions, and with complexity come more exceptions. If this blog does anything, I hope it limits the disappointment when your vendor provides a vague estimate of the development cost of your idea.

Microsoft 365 Price Increase

Episode 61

Last week, Microsoft announced in a blog post that it will be increasing the commercial pricing of Microsoft 365 licenses.

The price increase comes after a massive development push as organizations moved en masse to working remotely. Working remotely requires collaboration across a greater number of dimensions, ever-present vigilance regarding security and privacy, and continued innovation through the introduction of automation tools and machine learning.

Screenshot of a typical Teams digital conference meeting with nine participants

New pricing

On March 1, 2022, Microsoft will update its list pricing for the following commercial products (prices are in US dollars per user/month):

  • Microsoft 365 Business Basic (from $5 to $6)
  • Microsoft 365 Business Premium (from $20 to $22)
  • Office 365 E1 (from $8 to $10)
  • Office 365 E3 (from $20 to $23)
  • Office 365 E5 (from $35 to $38)
  • Microsoft 365 E3 (from $32 to $36)

These increases will apply globally with local market adjustments for certain regions. There are no changes to pricing for education and consumer products at this time.

Benefits of a PWA

Episode 60

We live in an age where there is an app for (almost) everything. Powerful mobile devices freed us from the desk and enabled productivity from anywhere (with reliable internet access). The two dominant platforms for mobile development are Android and Apple (iOS). Traditionally, mobile applications were built specifically for the operating system of the device; these are commonly referred to as native applications. Introduced by Chrome developer Alex Russell and designer Frances Berriman in 2015, Progressive Web Apps (PWAs) aim to build a better experience across devices and contexts with a single code base. A single codebase supporting any device is a considerable improvement over multiple versions of the same application for every supported platform. This blog will explore the benefits and drawbacks of PWAs and what they mean for businesses considering application development.

Not all apps are the same. A native application runs directly on the device’s operating system, which means it has direct access to all features like the camera, microphone, thumbprint reader, etc. Progressive web applications work through the browser, which has access to some features of the phone, but the application remains external to the operating system of the device. This is what makes native apps “native.”

Since 2015, native app development has declined as businesses look to find efficiencies and improve user experience. The core benefit of PWAs is that they can run on any device or operating system. This offers significant cost savings during both development and maintenance, as the application is developed once and there is only one codebase to maintain. Additionally, updates become available automatically instead of relying on the user to update the application on their device. A PWA runs in the device’s web browser, so the application takes very little device storage, can work offline, and gets its security from HTTPS, which provides browser-to-server encryption. On top of all these benefits, the experience for the user is just like a native application. Without knowing it, you have more than likely used a PWA; here are some popular examples:

  1. AliExpress
  2. Financial Times
  3. Flipboard
  4. Forbes
  5. OLX
  6. Pinterest
  7. Starbucks
  8. Trivago
  9. Twitter Lite
  10. Uber
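
The offline capability mentioned above comes from a service worker, a small script the browser runs alongside the page. Below is a minimal cache-first sketch; the cache name and asset list are illustrative assumptions, and the event typings are loosened for brevity.

```typescript
// sw.ts -- a minimal cache-first service worker sketch.
const CACHE_NAME = "pwa-demo-v1";
const ASSETS = ["/", "/index.html", "/styles.css", "/app.js"];

// On install, pre-cache the app shell so it can load with no network.
self.addEventListener("install", (event: any) => {
  event.waitUntil(
    caches.open(CACHE_NAME).then((cache) => cache.addAll(ASSETS))
  );
});

// On fetch, answer from the cache first and fall back to the network.
self.addEventListener("fetch", (event: any) => {
  event.respondWith(
    caches.match(event.request).then((hit) => hit ?? fetch(event.request))
  );
});
```

Registering it from the page is a single call, navigator.serviceWorker.register("/sw.js"), after which the browser handles installation and updates.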

However, there are some drawbacks to PWAs. Because a PWA operates in the web browser, it will have some limitations on functionality that depend on the operating system of the device. For instance, it could be more difficult to integrate the thumbprint reader as an access feature in a PWA compared to a native application. Another drawback is that because the PWA runs in the browser rather than directly on the device, it will consume more battery life due to higher CPU requirements.

Where PWAs struggle, native apps shine. Running on the device’s operating system improves performance and battery life. Full device integration allows for a feature-rich solution, and there is a large developer community to draw upon. But like a sports car, the specialization that enables high performance comes at a cost. Development and maintenance are more expensive, and more resources are required to launch the application. Likewise, users must install a native application on their device to access it, rather than simply navigating to a web link as with a PWA. With how rapidly mobile devices change in both hardware and operating systems, ensuring a uniform user experience across devices, managing security, and responding to changing market demand requires significantly more resources than the single codebase of a PWA.

To summarize, native apps are best when you are building a feature-rich solution requiring advanced device functionality that is of greater importance than budget or time to market. PWAs can provide significant value for businesses today, as solutions increasingly require experimentation and continuous iteration from feedback. PWAs are extremely powerful for getting a minimum viable product to market quickly and inexpensively. Likewise, if the application is expected to have a very long shelf life, maintaining a single codebase will provide significant cost savings long term. Additionally, frameworks like React Native are in constant development and have been closing the functionality gap between web-based and native approaches. In the world of business, agility trumps functionality depth; therefore, in this context, PWAs can provide far more value to your organization than native applications.

Software development is a social activity

Episode 59

There is a myth in software development that applications are built by lone programmers: a phenomenon of rock stars performing heroic feats of self-sacrifice to single-handedly deploy world-changing applications all by themselves. In reality, any application developed today is the result of the cumulative contributions of thousands of smart people. By walking through my personal journey of learning SQL, SSIS, and Power BI, I hope to convince you that, even in the most isolating circumstances, software development is most productive when viewed as a social activity.

Modern organizations need to operate both at scale and with precision. These requirements are beyond the capacity of a single individual no matter how brilliant or capable they are. At the highest level, it is my opinion that the goal of software development is to produce quality working software quickly. There are numerous competing and overlapping ideas that attempt to describe the “best” way for software to be developed. Two ideas that help achieve this goal are Continuous Delivery and Continuous Integration.

Screen grab showing DevOps commits in a Git environment for software development

Continuous delivery is a software engineering approach in which teams produce software in short cycles, ensuring that the software can be reliably released at any time without a manual release process. It aims at building, testing, and releasing software with greater speed and frequency. Continuous integration is the practice of merging all developers’ working copies into a shared mainline several times a day. In short, we want to deliver small features or changes regularly. If we consider technical debt, which has been covered in this blog series, the concept extends all the way down to a single line of code: the longer a feature remains isolated from the larger system (continuous integration) or from users (continuous delivery), the greater the risk that the new code will break something, or that revisions will be more expensive after user feedback.

How does this all fit in with lone programmers? If the development team is one person, there is no need to worry about feature branching. If the developer also happens to be the end user, development can be done directly in the production environment. This was exactly the case for my learning journey. The task was to replace a set of spreadsheets for a video game, EVE Online, that I have been playing for over a decade. These spreadsheets were my pride and joy, and I used them to showcase my Excel brilliance. It was not until I joined MERAK and engaged with professional developers that elements of my naivety became visible. The turning point was attending a Power Query talk at a Microsoft Ignite conference that highlighted how my knowledge covered only a tiny fraction of what was possible. The major lesson was that no matter how confident one is in their skills, there is always more to learn; so lose the ego, and be open to influence.

Two years ago, I started the process of replacing the functionality of the spreadsheets with a combination of SQL, SSIS, and Power BI. With Power BI being a natural extension of Excel, the first stage was to replace all Excel visualizations and logic tables, with Power BI acting as the new front end. This took about a year to fully realize, with numerous re-writes over that time, as knowledge gained on each pass would highlight architecture or implementation weaknesses worthy of a re-write. With Excel still acting as both the database and the ETL engine for requesting and mashing data from numerous REST endpoints, refresh performance was horrible for both Excel and Power BI. Excel is magical, but it is not a great tool for this purpose. For anyone learning Power BI, and more specifically Power Query and DAX, I strongly recommend going through the exercise of building the same solution twice: the first iteration using as much Power Query as possible, and the second using as much DAX as possible. While both systems are extremely flexible, they are, if used right, highly complementary. A general rule of thumb: do everything you can at the row level in Power Query, and leave aggregations to DAX.

Power BI

Once modestly comfortable with Power BI, the next task was to replace the data storage and ETL layer with SQL and SSIS. This step would have been impossible without guidance from MERAK developers. The replacement was done one table at a time, focusing first on datasets that ran over Excel’s row limits. After about six months, Excel was retired. Refresh cycle times went from 45 minutes down to three. Super fast, easy macro-level control of connector parameters, no row limits, automatic backups, deeper analytics capacity, and so much more: the new system operates at a completely different level compared to the old one.

Throwing away all that messy collaboration stuff might initially seem like a productivity-boosting decision. Personally, I have moments of intense productivity when my environment is free of distractions and I have had some time to settle my mind on a problem, with good music in the background. This highly productive state is isolating. It’s that grooving, head-bopping, keyboard-smashing state, with the logic flowing as fast as I can input it. There is an intense feeling of pride after such a session, as it feels like much progress has been made.

In the past few months, I have been working on a significant re-write of my SQL and Power BI code. Now that I have a good grasp of basic SQL scripting, this re-write has had numerous moments where I was in that groove state. While the nearly finished system works and feels like a great accomplishment, it was not until I began showcasing the result and speaking with others about its architecture that once-invisible flaws began showing themselves. One way to rationalize the flawed process is to frame the future work as “performance gains”, “refactoring for readability”, “hardening”, “bug fixes”, “quality checks”, etc. But if I had checked in with others significantly earlier, many of the flaws could have been dealt with before they grew to a state requiring large revisions. While this latest revision employed code structures I was using for the first time, the isolated iterative process could have been significantly more efficient had I connected with others earlier. Instead, I learned the hard way by making mistakes.

Bad Query

Ok, so just replace me with a more experienced and competent developer. The kicker is that the problem will persist, as future flaws will scale to match the competency of the next individual. While a better developer would not have made the newbie mistakes I did, there are potentially more costly revisions waiting for the organization due to the increased complexity a stronger developer can take on. Now scale from a single developer to an entire team of solo rock stars and we have a perfect storm in which significant talent and effort can be wasted. You do not have to look too hard to find stories of organizations throwing away years of development effort because individually built features were unable to integrate with each other.

People are shaped by their environment. We are highly social and will quickly adapt to the norms of the group to fit in; this is why business strategists spend so much time and effort on corporate culture. Software development practices will be shaped by the culture of the organization as well as by the tools (and governance structure) used to facilitate the development practice. From a culture perspective, we want to acknowledge that errors will happen and that finding errors is not a source of blame but a source of excitement and learning. Pair programming, code reviews, lunch and learns, or a comfortable leisure area: use the tools that work for your organization to encourage developers to socialize and share ideas. Structure development governance practices to ensure work is committed regularly and branches are not left to fester. Acknowledging that software development is a highly social practice is critical to keeping your organization technologically relevant.

Backing up to the cloud

Episode 58

It is inevitable that data will be lost: through hardware failure, mistaken deletion, power failure, natural disaster, or malicious action. To mitigate such events, it is common practice to make copies of our data just in case we need to restore what was lost. While appearing simple on the surface, developing and acting on a backup policy has many elements to consider, and the selection criteria are unique to each organization. This blog is going to briefly cover considerations when backing up data to the cloud.

Increasingly, organizations seek to build a data-driven culture; as a result, more data is being retained and analyzed, especially as bottom-up methodologies, facilitated by machine learning, gain popularity. It feels safer to keep all data and let the model training process or the data junkie decide what is valuable. But as data quantities scale exponentially, so do bandwidth and processing requirements. No business model can sustain exponentially growing system requirements indefinitely, and eventually action will need to be taken to constrain the system.

Rays of sunshine shoot above a thick cloud bank

Not all data are the same, nor should they be backed up in the same way. Data can be categorized by file type, size, timeliness, and classification. File types could be text files, application documents, images, audio, and video. Size could range from hundreds of bytes to exabytes. Timeliness is the estimated useful life of the data and how frequently it will need to be accessed. Lastly, classification could be public, internal, confidential, or restricted. With this simplified model, we can categorize data and see the need for differentiated backup policies.
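
To make the model concrete, the four dimensions could be captured in a structure like the following sketch; the field names and example values are mine, mirroring the categories above.

```typescript
// One profile per dataset, mirroring the four dimensions described above.
type Classification = "public" | "internal" | "confidential" | "restricted";

interface DataProfile {
  fileType: "text" | "document" | "image" | "audio" | "video";
  approxSizeBytes: number;  // from hundreds of bytes to exabyte-scale stores
  usefulLifeDays: number;   // timeliness: estimated useful life of the data
  accessesPerMonth: number; // timeliness: how often it will be accessed
  classification: Classification;
}

// Example: a confidential finance report, rarely accessed after a quarter.
const quarterlyReport: DataProfile = {
  fileType: "document",
  approxSizeBytes: 2_000_000,
  usefulLifeDays: 90,
  accessesPerMonth: 1,
  classification: "confidential",
};
```

Each distinct profile can then be mapped to its own backup schedule, retention period, and storage tier.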

With a model in hand to categorize the data, let’s now look at the considerations on cost and recovery. Cloud technologies feel so ethereal that one may believe that the data is really in the clouds, but the information is physically stored in a datacenter somewhere. That somewhere might have implications for organizations with contractual or regulatory requirements that demand that data is retained in the country/jurisdiction of origin. Likewise, that physical location may experience a natural disaster and if that datacenter is the location for all your production and backup data, you are going to have a bad day.

While the common saying is that backup storage is cheap, this statement is only true under certain conditions. For instance, Google Drive and OneDrive are free to a point; costs can begin to add up if backup policies are not optimized for their storage requirements. To help determine data storage requirements, ask: What data needs to be stored in the cloud? What is the backup schedule: daily, weekly, monthly? How many copies are retained? When are they deleted? What flexibility is required when restoring lost data (single file, single machine, single server, entire environments)? Should backup copies be stored in multiple regions or with multiple cloud providers? Hopefully it is now easier to see that costs can balloon out of control if data is retained too long, excessive copies are created, multiple cloud providers are used in multiple regions, or archived data needs to be accessed frequently.
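
Those schedule and retention questions can be answered in code. Below is a minimal sketch of a grandfather-father-son style retention rule; the counts, 7-day windows, and calendar-month grouping are illustrative choices, not a recommendation.

```typescript
// Decide which backup copies to retain under a simple tiered policy.
interface RetentionPolicy {
  dailies: number;   // most recent copies to keep
  weeklies: number;  // one copy per 7-day window to keep
  monthlies: number; // one copy per calendar month to keep
}

function backupsToRetain(backups: Date[], policy: RetentionPolicy): Date[] {
  // Newest first, so the first hit in any bucket is the newest copy.
  const sorted = [...backups].sort((a, b) => b.getTime() - a.getTime());
  const keep = new Set<Date>();

  // 1. The most recent daily copies.
  sorted.slice(0, policy.dailies).forEach((d) => keep.add(d));

  // 2. The newest copy in each of the last `weeklies` 7-day windows.
  const seenWindows = new Set<number>();
  for (const d of sorted) {
    const window = Math.floor((Date.now() - d.getTime()) / (7 * 86_400_000));
    if (window < policy.weeklies && !seenWindows.has(window)) {
      seenWindows.add(window);
      keep.add(d);
    }
  }

  // 3. The newest copy in each of the most recent `monthlies` months.
  const seenMonths = new Set<string>();
  for (const d of sorted) {
    if (seenMonths.size >= policy.monthlies) break;
    const month = `${d.getFullYear()}-${d.getMonth()}`;
    if (!seenMonths.has(month)) {
      seenMonths.add(month);
      keep.add(d);
    }
  }

  // Everything not returned here is a pruning candidate.
  return [...keep].sort((a, b) => b.getTime() - a.getTime());
}
```

A rule like this keeps storage growth roughly proportional to the policy counts rather than to the age of the system.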

A network switch with Ethernet cables

Any conversation regarding digital technology goes hand in hand with security. Cloud providers spend billions of dollars on security research each year. Due to their scale, it would be impossible for individual organizations to match the sophistication and expertise of these providers. However, it would be foolish to assume that your data is 100% safe because it is in the cloud. Security is a shared responsibility between the cloud provider and its users. All the security research in the world will not keep an organization’s data safe if the administrator’s access credentials are compromised. It is extremely important that backup access and permissions are restricted to the minimum number of individuals. Lastly, disaster recovery is an event that one would hope to never have to experience, and due to the infrequent nature of such events, the practice of recovery can be very inefficient. This is where planning for the occasional “disaster day” could pay huge dividends by learning from mistakes in practice environments.

In short, the cloud provides an extremely robust platform for your organization to secure its data against disaster events. At the same time, excessive redundancy, weak data management, or poor security practices can quickly erode the competitive advantage provided by the cloud.

Reopening and the Purpose of an Office

Episode 57

We can almost feel it. Soon the worst of the pandemic will be behind us, and we can get back to visiting family and friends and attending in-person events. Walking into a packed room will not feel the same for a while, but I very much look forward to it. Since March of 2020, MERAK staff have been working remotely. Physical isolation aside, we are very fortunate that the shift to remote was not disruptive and work carried on normally. Early impressions of working remotely were captured in this previous blog. This blog is going to look forward and explore the importance of a physical space as non-essential businesses begin reopening.

It is safe to say that when we go back to normal, it will not be an exact replica of pre-pandemic life. The forced transition to remote work, all at once, was an epic experiment, a once-in-a-lifetime dream for social scientists. The predominant idea regarding productivity was that if employers and supervisors could not see staff, the staff were not being productive. This assumption was proven false, and I have yet to find reports of organizations rushing back to a full-time office model at the first opportunity in order to recover productivity losses.

Employees are just as productive working remotely as in the office. Everyone is unique, but I noticed a few themes that could explain why remote work does not hurt productivity. The first is that some employees are more productive at home, as there are fewer distractions there. The second is extended workdays: using time that would have been spent commuting to work or, for those more distracted at home, working extended hours to make up the difference. Lastly, there is pandemic boredom; there is only so much video streaming and home-baked bread one can consume. With blurred and extended workdays, the risk of burnout is real and will be a challenge going forward.

Arend van Eck and Brett Bickerton stand in the bullpen of MERAK Systems Woodlawn Road Office in Guelph, Ontario

If productivity, on average, is not impacted, what do employees think? Most individuals in my social circle absolutely love working remotely, for a diverse set of reasons. The most common theme has been flexibility. The ability to quickly take care of personal responsibilities during working hours, instead of letting it all pile up for the end of the day or the weekend, has been a massive benefit. There are also little perks, like online purchases delivered to the home instead of the office, taking a break to tackle quick chores, or giving that little bit of extra attention to kids and pets. Instead of fitting life around work, life and work priorities are shared. Many of us have seen an improved quality of life through the many hours gained by no longer commuting. For the average commuter, it is fair to assume that remote work gives back an extra hour a day. Many individuals might otherwise have only a few moments at the end of the day for themselves; an extra hour (not at the cost of sleep) is huge.

An interesting trend as people settle into remote work lifestyles is the significant attention placed on the personal workspace. While I am completely disconnected from regular TV, I can imagine that we will be flooded with home office makeover TV series and YouTube channels. I suspect the office flex of the future will be communicated through fancy webcam backdrops. Flexibility extends to the times of day one works: some people have discovered they are more productive in short sprints and have split their workday into segments, or into two blocks containing a morning and an evening shift. And let us not kid ourselves, the work mullet, “business on top and whatever works on the bottom,” is awesome.

Sean sits on the roof of his home with a laptop computer.

From the employer’s perspective, with non-essential businesses closed, they are currently paying for a space that is not being used. Some organizations have invested large sums of capital, care, and attention to make employees feel comfortable at work, adding amenities like kitchen spaces, lounges, sunrooms, gyms, day care, green spaces, etc. The same care and attention goes into providing a diversity of workspaces, whether cubicles, private rooms, open tables, or small nooks, to ensure every employee can find the right space to work productively. Let us not forget the effort that goes into planning collaboration spaces and office events, or the emergent behavior that comes from everyone sharing the same space; I very much miss lunchtime euchre and the conversations that came with it. All these administrative expenses go to waste if employees are not coming to the office.

With a fully remote model, employers are not tied to the talent pool of a local geography and can access anyone with a reliable internet connection. Global scaling is no longer dependent on the expansion of physical spaces but on the expansion of reach through employees’ personal networks. Additionally, employers could split the difference: offer a hybrid policy, scale down the office to a shared space, or share office space with other organizations. The primary benefit of working remotely reported by my social circle was choice, so a hybrid solution seems obvious. To get employees to come back to the office, though, providing amenities not easily available at home is going to be a determining factor for employees weighing the value of staying home versus commuting in.

If employees would prefer to work remotely and employers can save a ton of administrative costs, why have an office? Is there value in a policy where employees are expected to come in? There are some not-so-obvious benefits to having an office and expecting employees to primarily work there. Firstly, an office is not just a place to work; work is where adults make many of their new friendships. While it is entirely possible to build meaningful relationships without ever being in the same space, a remote friendship is different. Despite being able to connect virtually with family, friends, and colleagues, have you noticed that random conversations with strangers during the pandemic last way longer than normal? Ever stay on the line with a telemarketer just for the company? We are highly social creatures, and physical socialization is critical for good mental health.

Jacob is visited by his cat who patrols around his shoulders.

An office space extends socialization beyond immediate work collaborators. There is no digital equivalent to water cooler chat or passing a colleague in the hallway. Physical spaces shape culture, and without a foundational location it is much more difficult to shape it. With remote work, culture is instead shaped by the micro-communities that form among close collaborators. Video calls miss critical body language that would be easily registered during an in-person meeting. If an organization goes to a hybrid or fully remote model, socialization and the ability to shape culture will suffer.

Remote work is feasible, providing an additional choice for leaders, and this is great. However, I fully expect that hidden costs will slowly emerge over the next few years as we gain more experience with this new normal. In the meantime, I feel extremely fortunate to have been able to experience the pants-less work revolution.