Balancing Security with Usability

Episode 56

A day does not go by without a news article covering the latest high-profile security breach, with thousands to millions of users or customers impacted. With data being the commodity of the 21st century, there is a war over its control and access. Unfortunately, companies often only take security seriously after an incident has occurred; a proactive administrator may ask to have “all the security”. However, a maximally secure system may come at a significant cost to usability, fostering a culture where users find workarounds to complete their tasks.

A few months ago, I needed to go into the office for the first time in slightly over a year. Walking into the time capsule that was my cubicle, I was met with a security policy that required me to reset my password. We use two-factor authentication, so off I went resetting the credentials on the four devices I use for work. Shortly into the reset extravaganza, I was prompted with a notification that I had exceeded the number of two-factor token requests and should try again later. With only some services re-authenticated across my devices, I was stuck for about an hour with limited access while I waited for my token pool to refresh. While not the end of the world, it is a simple example of a situation that leaves me highly incentivized to find any way possible to prevent a password reset from happening again.

Man in a red tracksuit with sunglasses standing near a hedge. The text overlaid reads: "I changed all my passwords to 'incorrect' so whenever I forget, it will tell me, 'your password is incorrect'!"

We have all had that moment, when setting up a new account, where the password requirements drive us nuts. Make sure to have uppercase and lowercase letters, numbers, and special characters, and be at least eight characters long. Make sure to use different passwords for every account you own, change passwords regularly, and do not repeat passwords between services. Passwords are just one layer of security among many, but we are all familiar with password schemes that make accessing or regaining access an absolute pain. We are all guilty of recycling passwords, using incremental digits on resets, and other workarounds to make our lives easier; however, these workarounds create exploitable vulnerabilities. The weakest link in your organization is your users. Phishing scams are getting extremely good at spoofing regular communications. All it takes is a momentary lapse in vigilance, a user pressed by a deadline rushing through work, or an unaddressed application vulnerability, and your organization is compromised.
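To make the kind of policy described above concrete, here is a minimal Python sketch of such a check; the specific rules and thresholds are assumptions for illustration, not a recommendation for any particular product.

    import re

    def meets_policy(password: str) -> bool:
        """Illustrative policy: at least eight characters, with uppercase and
        lowercase letters, a digit, and a special character."""
        return (
            len(password) >= 8
            and re.search(r"[A-Z]", password) is not None
            and re.search(r"[a-z]", password) is not None
            and re.search(r"\d", password) is not None
            and re.search(r"[^A-Za-z0-9]", password) is not None
        )

    print(meets_policy("Tr0ub4dor&3"))  # True
    print(meets_policy("password"))     # False

Even a check this small shows why users push back: every added rule makes a memorable password harder to construct.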

No system is 100% secure. At best, we can achieve a system that takes more effort to breach than the potential reward is worth. Security can be thought of simplistically as defense layers and partitions. A password is a layer; what that credential has access to is the partition. The more layers of defense, the harder it is for an attack to be successful; this is why two-factor authentication is so important, and it is reported to block over 99 percent of automated account-compromise attacks. The better partitioned users are, the less damage can be done. Limit access to only the data and administrative control employees need and nothing beyond.
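As a toy illustration of layers and partitions, consider the sketch below; the user names, resources, and rules are invented for the example and are not from any particular product.

    # Each check in authenticate() is a defensive "layer"; the SCOPES table is the "partition".
    SCOPES = {"alice": {"sales_reports"}, "bob": {"sales_reports", "hr_records"}}

    def authenticate(password_ok: bool, mfa_ok: bool) -> bool:
        # Two layers of defense: both the password and the second factor must pass.
        return password_ok and mfa_ok

    def can_access(user: str, resource: str) -> bool:
        # Partitioning / least privilege: an authenticated user only reaches
        # resources explicitly granted to them.
        return resource in SCOPES.get(user, set())

    if authenticate(password_ok=True, mfa_ok=True):
        print(can_access("alice", "hr_records"))  # False: outside Alice's partition

Adding a layer strengthens every partition behind it, while tightening a partition limits the blast radius when a layer eventually fails.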

Security by design is critical. Security needs to be part of the development process instead of a feature added after the fact. Adding layers of security “after the fact” is like adding locks to a cheap door: more add-on locks mean the door’s users need more keys and more time to unlock it, and despite the extra locks, the door will be kicked in anyway.

Usability depends as much on culture as it does on the design of the system. Consider a culture where users view security features like multi-factor authentication as a chore, view shared folders as the best way to manage their work, see user roles as a hassle or a roadblock to productivity, and are wary of software updates for fear of breaking something. Designing a system for users in this category would be challenging to say the least. Part of this problem can be solved with training and awareness; however, making security a priority falls to your organization’s leadership, who must both prioritize security implementations and strongly communicate their value. Secondly, security is not a set-and-forget system: policies must be reviewed regularly and changes implemented as required. Security vulnerabilities are much like technical debt, but instead of the cost compounding steadily over time, the cost of security debt is asymmetric with risk, where a small increase in risk can carry a massive cost when a breach occurs.

Security and usability may seem at odds with each other, but they are in fact complementary core considerations when developing software or managing a technical environment. As people are the greatest vulnerability, giving equal consideration to culture, training, and the technical environment will yield far greater results than technical improvements alone.

Responding to RFPs: A Vendor’s Perspective

Episode 55

“What were they thinking?!” A not uncommon question that we ask ourselves when reviewing RFPs. Public institutions do their best to offer opportunities to vendors with a process that is both fair and transparent. It is a hard task to ensure that the right vendor or solution is selected, a fair opportunity is given to all vendors, and the whole process is timely and efficient. At best you can pick two, but realistically a compromise between the three will be made. I have seen several RFPs with response requirements that are impossible to meet with the information given; then, adding insult to injury, the responses to vendor questions were equally unhelpful. More troubling are invites late into the bidding process with clear indication that few vendors have shown interest. Technology relevance is a moving target, and procurement is the easiest step of the journey. The purpose of this blog is to highlight what we look for in RFPs and how it determines the way we write our response.

A worried Lego businessman sits at a Lego desk

If I were to broadly categorize all RFPs, they would fall into three buckets. The first is a need for strategy and/or architecture expertise. This is most common with organizations that are not of a size to warrant an architect or Chief Technology Officer (CTO), though it is not uncommon for organizations that have these resources to still make the request. What naturally follows is the second category, which is implementation support. A direction has been set and now it is time to execute. It is perfectly acceptable to procure expertise to implement a particular solution, whether that is custom development or an off-the-shelf solution. The danger in this category is when it is combined with strategy and/or architecture services. If a vendor sells hammers, everything is a nail. Asking for architecture or strategy advice from such a vendor will lead to the predictable result of your organization owning a hammer. The final category is resources. An organization might have all the above but needs more bodies or a specialist to work on a particular problem. This is an increasingly common request, especially as the demand for technology talent seems only to increase.

What problem are you trying to solve? This is the question we ask every time we review an RFP or engage with a prospective client. It is a simple question, yet answering it can be very challenging. Modern organizations are a mix of people, processes, and technology. Much like the vendor that sells hammers, technology companies are biased towards viewing every problem as a technology problem. But is the problem you are really trying to solve a technology problem? Are your staff willing, capable, and able to undertake the processes your organization has established? Have your processes adapted over time, or is this simply the way it has always been done? External consultants do not “fix” people; your leaders do. All the external training and workshops in the world will have little impact if the willingness to internalize this information is not shared by both the leadership team and staff. A process developed by a consultant will only yield value, first, if it is used, and second, if it is used within the frame of reference it was designed for. One critical piece left to consider is where the institutional knowledge lies after the job is done. If the knowledge walks out the door with the vendor, does this leave your organization in a more resilient state than before the project started? If the answer is no, your vendor will be more than happy to arrange a managed service contract to ensure that the knowledge your organization invested in continues to be available.

A hammer lies among a bunch of bent nails and one straight one half hammered

Time to write an RFP: what can be done from the vendor’s perspective? Specify the solution upfront at your own risk. If this approach is taken, I would strongly suggest revisiting the last two paragraphs. What problem are you trying to solve? If the solution does not address the real problem, the solution will create another problem. Was the solution suggested by the vendor who sells hammers? Is there good leadership behind the strategy and architecture plans? Red flags for us include a request to replace a product that is relatively new, a request for a specific solution or product with no mention of previous effort or the rationale for its selection, or a long chain of procurement requests for tasks that, from our point of view, could likely have been handled internally. These red flags give us cause to hesitate in responding or add to our risk modifier for the project.

Another common challenge when responding to RFPs is being given little to no context. Providing your organization’s mandate will help us structure our response to match the tone and language of your organization; however, there are a lot of dots to connect between a mandate and providing a technology solution. Some RFPs have little detail on the current state of the technical environment, with no guidance to help vendors determine magnitude: how many total and concurrent users, how many applicants per period, or what the data size, location, and storage and transfer requirements are. A web application for ten users is very different from one for 10,000 users. Likewise, if the request is to replace a paper process with a technology solution, consider the marginal benefit of the replacement. Technology can be great, but it is possible to see no net benefit; for some automation projects, the cost of development and maintenance can exceed the time saved by replacing the old process. If the project’s magnitude is difficult to determine, we have to estimate based on worst-case scenarios, and if the request is fixed price, the lack of clarity is enough for us to decline to respond.

Closely related to magnitude is risk. Some tasks are riskier than others. Replacing a highly interconnected, custom-developed application is very different from starting with a blank slate. With greater complexity should come greater detail in the description of the current state of your IT environment. If the project seems “off”, we must adjust our risk weighting appropriately, especially for fixed-price opportunities.

Lego workers grin as they push carts full of coins toward a happy boss

Consider right-sizing the RFP response requirements to the size of the opportunity. A targeted and well-documented proposal can easily take two weeks to write. If the RFP ask is mismatched with the opportunity size, vendors have no choice but to provide a cookie-cutter response and inflate the rate to match the perceived risk. At the extreme, if the RFP asks for too much, we will not respond at all.

While this might be a rant more than a blog, I would like to balance the discussion by ending with an RFP request that I thought was very helpful from the vendor’s perspective. A challenge for organizations is ensuring that vendors can do what they say they can do; with technology changing so rapidly, it is difficult even for seasoned practitioners to remain technology relevant. Assuming that your RFP has avoided some of the major landmines from earlier, here is a suggestion. Find a very specific problem that your team is looking to solve and ask vendors to provide a short video demonstrating a solution. What I like about this approach is that it makes sure the RFP issuer has spent some time breaking down their high-level problem into consumable pieces, which gives vendors valuable insight into the issuer’s current-state environment. Likewise, it forces the vendor to demonstrate competency directly and shows that their knowledge and expertise are transferable. This exchange can help both organizations get an understanding of what it would be like to work together. Getting a feel for what working together could be like is invaluable for vendors, as the unfortunate consequence of the RFP process is the sterilization of a relationship into the diluted form of contractual obligations.

How to Pick the Right CRM

Episode 54

Every organization needs a system to manage client interactions and sales processes. This is where a Customer Relationship Management (CRM) solution can reduce administration, improve customer service, and help close more deals. The CRM market is very mature; in all but the most niche cases, there is an off-the-shelf solution that will meet requirements. However, a mature off-the-shelf market means overwhelming selection. Should we select the most popular? Cheapest? Most flexible? Best targeted to us? This blog will provide a simple framework for selecting the right CRM for your organization.

What problem are you trying to solve?

Far too often a solution is proposed before there is a solid understanding of the problem. Here are some problems that a CRM might address: we need to standardize our sales process, make sure we follow up on sales leads, identify gaps in our sales process, reduce administration time in our sales process, or open our sales data to other parts of the organization. Not every problem is a problem that requires a software solution. Recall that organizations are a mix of people, processes, and technology. Applying a technology solution to an incomplete process, or to a good process that people have trouble executing, will add problems instead of solving them. Additionally, your organization might have a fantastic set of people and processes, but a new CRM solution cannot replicate what is in place one for one. For a discussion on updating processes versus custom/configured products, see this blog here.

A person attempts to force a square peg into a round hole with a hammer.

Review compatibility with current software

Whether you are installing a CRM for the first time or replacing an existing one, the new system will likely be one of many products already deployed at your organization. There might be business productivity platforms like Microsoft 365 Business or Google Workspace, Enterprise Resource Planning (ERP) systems, file stores, and planning tools. Sales processes touch every part of the organization, so it is essential to take the time to understand how a new CRM product will interact with your current or planned software environment.

Understand the implementation

Contrary to marketing, implementing a CRM is not as simple as buying licenses and assigning users. Some questions to ask:

  • How will existing sales data get migrated? Can the data be imported?
  • What data cannot be imported?
  • What if manual entry is the only option? Would the organization have to start fresh?
  • How does the out-of-the-box process match your existing process?
  • What are the costs of configuring the tool to follow the existing process versus changing the existing process to meet the standard configuration of the CRM?

CRM developers must design a product that best fits many organizations; otherwise, the solution is custom development. When you purchase an off-the-shelf product, you are buying a solution built for the requirements of other organizations. For more details on product development, see this blog here.
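To make the data-migration questions above concrete, here is a hedged sketch of a dry run that maps a legacy export to a new CRM’s import format; the file names, column names, and required fields are invented for illustration and will differ for any real product.

    import pandas as pd

    # Hypothetical mapping from the legacy export's columns to the new CRM's import columns.
    FIELD_MAP = {"Acct Name": "account_name", "Contact": "primary_contact", "Stage": "pipeline_stage"}
    REQUIRED = ["account_name", "primary_contact", "pipeline_stage"]

    legacy = pd.read_csv("legacy_crm_export.csv")   # export from the old system
    migrated = legacy.rename(columns=FIELD_MAP)[list(FIELD_MAP.values())]

    # Columns with no home in the new CRM and rows missing required values
    # are the records that force a manual-entry or start-fresh decision.
    unmapped_columns = [c for c in legacy.columns if c not in FIELD_MAP]
    incomplete_rows = migrated[migrated[REQUIRED].isna().any(axis=1)]

    print("Unmapped columns:", unmapped_columns)
    print("Records needing manual cleanup:", len(incomplete_rows))

A dry run like this, done during the trial, turns the abstract question “can the data be imported?” into a concrete count of what will and will not come across.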

Young woman stands before a decision board full of sticky notes and articles to decide on best action

Prepare for user adoption

By this point, I hope it has hit home that updating software systems is not as simple as buying licenses. Likewise, people do not update like software. Users need to be ready, willing, and able for the implementation to be successful. Budget time for gaining buy-in and training users on the new system.

Fully commit to the trial

There is only so much that can be answered through upfront analysis; therefore, experimenting with a trial is an excellent way to prepare users, understand the implementation, review compatibility, and confirm that the selected solution will solve the problem(s) that were identified. I will be the first to admit that I have signed up for many trials and left them to rot after spending a few hours clicking around. Treat the trial like the real implementation, with as much seriousness as you can practically afford. Trials can be rolled into a full subscription very easily; this is by design on the software vendor’s part. If the trial crashes and burns after a solid effort, consider the experiment a massive success, as learning the same result from a no-going-back implementation would be a disaster. Trials might not be practical for some organizations; however, I want to make clear that someone spending a few hours clicking buttons will likely yield little value compared to making a reasonable attempt to understand the solution before committing to it.

The Power Platform Centre of Excellence

Episode 53

Numerous times in this blog series, I have covered the concepts of Power Users and no- and low-code application development platforms in the context of technical debt, development governance, development practices, and using these tools and practices to gain a competitive advantage. Software application development practices are rapidly evolving as organizations experiment and learn what works and what does not. No- and low-code development platforms are empowering business users to develop solutions for themselves, which can give organizations the potential to rapidly scale new solutions. The core challenge for IT administrators is enabling growth while maintaining governance and control. This blog will provide an overview of the Power Platform Centre of Excellence and how it attempts to address this core challenge.

What is the Power Platform? The Power Platform is a collection of four applications: Power BI, Power Apps, Power Automate (formerly Microsoft Flow), and Power Virtual Agents. As the name suggests, each of these applications is geared toward Power Users. Power BI is a business intelligence platform, Power Apps is a low-code application development platform, Power Automate is a business productivity automation platform, and Power Virtual Agents (the odd child of the bunch) is a no-code chatbot platform. A core advantage of the Power Platform is its out-of-the-box integration with Microsoft 365, Dynamics, and Azure.

Diagram showing the applications of the Power Platform from left to right: "Power BI", "Power Apps", "Power Automate", and "Power Virtual Agents". Below these are the technologies that power the apps, including "Data Connectors", "AI Builder", and "Dataverse".

With these platform applications, an organization can gain visibility into the data it generates, automate processes, build applications, and scale its ability to engage with customers. But isn’t that what IT does for us right now? What is different? The core mission for the Power Platform is to empower everyone in your organization with tools that were once exclusive to professional developers. It is a completely different paradigm for IT governance when applications can span from personal use to enterprise mission-critical systems, all developed and managed collaboratively by business users and professional IT staff. Such an environment could be highly innovative and agile, but it could also quickly fragment into a fragile collection of applications, making control of the IT environment extremely challenging. Traditional shadow IT challenges come from unmanaged SaaS application procurement; in this case, however, shadow IT infrastructure is being developed by the organization’s own users. It is important to mention that not all organizations want to, or are able to, “move fast and break things”. While it is good for innovation to have areas to explore freely, the core business will require structure. One of the challenges with adoption is the sheer size of these tools and not knowing where to start.

This is where the Power Platform Centre of Excellence comes in. Drawing on the experience of early adopters, Microsoft (with help from partners and clients) has put together what I would call a technical governance layer that provides environment and lifecycle management for Power Apps and Power Automate projects. With the framework in place, along with curated learning resources, the Centre of Excellence reduces the adoption overhead for organizations looking to use the Power Platform to drive innovation. You cannot manage what you cannot measure, and one of the main contributions of the Power Platform Centre of Excellence is an extensive set of tools that give administrators visibility into what their organization’s users are building, deploying, and using.

The Power Platform Centre of Excellence covers six main areas: Architecture, Security, Monitoring, Alerts and Actions, Deployment, and Education and Support. Covering these topic areas in any depth would be a blog (and more!) unto itself. In its current state, the framework represents best practices as they are understood today. I fully expect it to evolve rapidly over time, as the products themselves are also evolving rapidly.

To get started, there is a Centre of Excellence starter kit that walks administrators through the process of administrative setup, governance configuration, and nurturing the creative environment. For a deep dive, a link to the white paper is here. Choosing to work within the Power Platform ecosystem represents a commitment to a learning curve for both administrators and users. The governance focus of the Centre of Excellence can initially feel like the organization is implementing a “Big Brother” form of surveillance over its users. Technology governance can both stifle and promote collaboration and innovation. Ever waited days, weeks, or months for IT to install an application on your machine? To nurture innovation, the dashboards can be used to connect different groups within the organization who are using the same services or connectors. Likewise, there is no point in independently solving the same problem twice; promoting solutions and learning from others who have already worked on a problem can be a massive accelerator. A Teams channel for Power Users can be a great forum for idea and knowledge exchange.
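As a hedged illustration of how that inventory might be used to connect groups, suppose the app inventory gathered by the Centre of Excellence has been exported to a CSV; the file name and column names below are assumptions for the example, not the kit’s actual schema.

    import pandas as pd

    # Hypothetical export of the app inventory: one row per app, with the owning
    # department and the connector the app uses.
    inventory = pd.read_csv("coe_app_inventory.csv")  # assumed columns: app_name, department, connector

    # Group departments by shared connector so teams solving similar problems can be introduced.
    shared = (
        inventory.groupby("connector")["department"]
        .apply(lambda depts: sorted(set(depts)))
        .reset_index(name="departments")
    )
    print(shared[shared["departments"].str.len() > 1])

Even a rough cut like this gives a change leader a list of natural introductions to make in that Power Users Teams channel.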

There is an inherent tension between the ethos of no- and low-code development and IT governance and control. The balance between these forces will be unique to every organization, and it takes experimentation, discovery, and learning to figure it out. Innovation is like driving a race car: you must go fast, but you also need to stay on the track. Change leaders and evangelists are essential to keep the momentum going through the inevitable ups and downs of the process.

A link to the Power Platform Centre of Excellence website is here.

Software Maintenance and the Replacement Trap

Episode 52

If we put off maintaining our vehicles, it is not long before they fail on us, likely well before their expected useful life. Software systems are no different; with regular maintenance, a system can age gracefully and provide value for decades. The challenge with managing software systems is the difficulty of determining the health of a system. Organizations must balance numerous priorities, and if the current state of their software systems is satisfactory, resources will be diverted away from maintenance. This is totally normal; however, if ignored for too long, it sets organizations up for what I will call the “Replacement Trap”.

A conversation regarding software maintenance requires an overview of technical debt. Technical debt is a concept in software development that reflects the implied cost of additional rework caused by choosing an easy solution now instead of a better approach that would take longer. Technical debt is repaid by spending development time on refactoring the project at the cost of adding new features or functionality. Technical debt can be viewed similarly to financial debt: if not repaid, it accrues interest, making future repayment more difficult. To combat technical debt, software maintenance can come in different forms: corrective, adaptive, preventative, and perfective (more info here).

A classical mouse trap with a large piece of cheese sits on a wood floor

Technical debt has one more dimension worth mentioning: institutional knowledge. It takes time and energy for an individual to learn the inner workings and nuances of a software system, and this knowledge requires maintenance too. Without maintenance, knowledge fades away or walks out the door when people leave or retire. Think of any task that you do only every once in a while: there is always some friction relearning how things worked. This friction grows quickly with the complexity of the system. It is not uncommon for a piece of technology to be installed and operate untouched for years. When technical debt accumulates, like interest on debt, there comes a point where the organization faces technical bankruptcy and, in many cases, will not know it.

When maintenance is put off for too long, there comes a point when the software systems the organization uses fall into an unhealthy state. Warning signs could be that the organization is still running software that has been deprecated by the vendor; changes are large, cumbersome, and time-consuming; analysis of next steps takes weeks or months; a heavy governance layer is required to oversee changes; it has been a long time since new functionality was introduced; there is continuous dependence on external vendors; or the backlog (if it exists) is difficult to organize and manage.

Organizations in this situation have set themselves up for the replacement trap. A breaking point is met, and the only clear and logical answer is to replace the current system with something new. If your organization was not on top of maintaining its existing systems, it is unlikely that it has the institutional knowledge to map the features and functionality from the existing system to a new one. If communicating and coordinating between functional areas of your organization is challenging, imagine how hard it will be to have a successful implementation when giving an external vendor (who probably has little idea how your organization operates) incomplete and incorrect requirements. It is like ordering food in a different language at a hardware store. The patio furniture is lovely and the gas grill looks amazing, but all I wanted was a burger.

But isn’t modernization a good thing? Replacing legacy systems will reduce costs, increase agility, and provide our customers and staff with a better experience. At a high level, before implementation starts, proposed replacements always appear to be a great fit. “We need to replace our ERP system”, “this awesome vendor has an awesome ERP system”, “it is just an ERP system, what could go wrong?” Is it realistic to believe that your organization’s vision of what an ERP system is will match that of a vendor creating a product targeted at many other organizations? How focused is the product on your industry? Are you coming from a custom implementation to a product that claims to be configurable for anyone? The greater the translation required, the greater the implementation risk.

When organizations fall into the replacement trap, the replacement implementation runs the risk of turning into a trash fire. Results may vary, but here are some examples: scope and costs balloon as the full extent of the previous system is revealed; the new system is implemented but with diminished capabilities due to a lack of implementation resources; or the new system is hacked into place, leaving the organization with the same pile of technical debt.

How to avoid the trap or get out of it? A well-maintained system evolves with the external environment, upgrades are planned rather than forced, and technical debt is understood and managed. For instance, a large part of our financial system still runs on COBOL (an excellent discussion on COBOL), a language originally released in 1959! However, these systems have been well maintained and can continue to run for many more years. In many cases, a replacement project and its subsequent maintenance cost an order of magnitude more than the legacy system’s previous maintenance budget; if the system has not yet been replaced, consider how the same resources could instead be deployed to pay down technical debt. Prioritize deprecated systems: fragile and insecure systems are disasters waiting to happen. Focus on development lead time, make work visible, limit the amount of work in progress at one time, reduce batch sizes, reduce the number of handoffs, identify constraints, and eliminate waste in the value stream. Develop a knowledge management strategy to build and retain working knowledge of the systems your organization uses and to avoid dependence on any single individual. Aggressively promote training and skill development of staff. Lastly, a digital transformation is not something that happens overnight; it is realized incrementally.

Embracing uncertainty and learning

Episode 51

Despite advances in science, uncertainty is a constant. If the pandemic has taught us anything, it is that predictions by their nature are subject to change with new data. Far too often I see predictions regarded as truth, stifling the dialogue that should focus on the assumptions rather than the results. We must push the boundaries of what we know and what we think is possible in the pursuit of broadening our awareness. If we embrace uncertainty, we are freed to explore and challenge the status quo. In this blog, I will cover how embracing uncertainty is a critical piece to fostering both a personal and organizational learning practice.

Learning does not stop after you finish post-secondary education. In fact, many valuable skills are not taught in typical post-secondary curricula because the technologies or processes are so new that the knowledge is currently found only among active practitioners. While there are practitioners who teach on the side, it does take time for post-secondary institutions to develop and approve educational content. More commonly, this niche knowledge is gained through direct experience, self-directed learning platforms, and an individual’s personal research. The market expects organizations to rapidly adjust and pivot business models, and these needs translate down to the employees. Credentials alone have trouble showing an individual’s ability to self-manage, clearly communicate, and be adaptable. Why are these skills important?

Metaphor image for complexity / uncertainty with wrench used on nail and hammer pounding in a bolt

To prepare for an uncertain future, one must actively seek out new knowledge and skills. In the words of a wise construction site supervisor, “it is good to go to bed less stupid”. Ask yourself, how many new pieces of software have you used in the last two years? What do you know today that you did not two years ago? This worldview can directly translate to the organizational level. What skills do our employees need to focus on? What competencies, as an organization, do we need to gain or improve? What methods should we employ to accomplish these goals and how do we measure progress?

Originally published in 1990, with a second edition in 2006, Peter Senge’s book “The Fifth Discipline” remains ever more relevant despite the passage of time and the accelerating technological change we see today. A summary of the core message of the book can be found here. There are two ideas from the book I want to use to close out this blog: “creative tension” and “commitment to the truth”. Below are two excerpts from the book:

Creative tension: Imagine a rubber band, stretched between your vision and current reality. When stretched, the rubber band creates tension, representing the tension between vision and current reality. What does tension seek? Resolution or release. There are only two possible ways for the tension to resolve itself: pull reality toward the vision or pull the vision toward reality. Which occurs will depend on whether we hold steady to the vision.

Commitment to the truth does not mean seeking the Truth, the absolute final word or ultimate cause. Rather, it means a relentless willingness to root out the ways we limit or deceive ourselves from seeing what is, and to continually challenge our theories of why things are the way they are. It means continually broadening our awareness, just as the great athlete with extraordinary peripheral vision keeps trying to see more of the playing field. It also means continually deepening our understanding of the structures underlying current events. Specifically, people with high levels of personal mastery see more of the structural conflicts underlying their own behavior.

When we embrace uncertainty, we acknowledge that it is our duty to seek out the truth, and it is through managing creative tension that we can find a greater understanding of the world around us.

Staying Technology Relevant

Episode 50

I cannot believe it, but here is episode 50! I thought the creative well would run dry, but here we are. It is a good time to reflect on past content and revisit the core ideas that form the foundation of the blog series. Staying technology relevant can be broken up into a few components: being mindful of the external environment and establishing practices that value humility, experimentation, and collaboration.

A driverless taxi takes on a passenger on a European street

An organization applies its core competencies to use resources (land, labour, and capital) efficiently to provide a product or service to the market. Firms thrive when they can produce a good or service at a lower cost, of better quality, or distinct from their competitors’. Traditionally, this differentiation came from sourcing cheaper labour or investing in economies of scale. Today, differentiation is nearly always facilitated through the deployment of technology: spreadsheets, enterprise systems, machine learning, or cloud technologies. Software has consumed the world. What has always remained is the constant acceleration of technological improvement. This acceleration increases the likelihood that an organization will face not just one existential threat, but many, over the span of a single individual’s working career.

We live in a world with ubiquitous software automation; however, we sit on the eve of a robotic automation revolution. Autonomous everything will likely sweep the world at a pace similar to the smartphone, a product we cannot imagine our lives without. Understanding this titanic shift is essential to staying technology relevant. It is not unrealistic to imagine a world where a consumer makes a purchase directly from the factory and the product is delivered straight to the front door with no direct human involvement. Big data and rapidly developing artificial intelligence stand to revolutionize nearly every facet of our economy. While I cannot speak for everyone, I absolutely love that curbside pickup is now commonplace, not to mention the perk of home delivery while working at home. When we take changing buying preferences and robotic automation into consideration, the core operating structure of most businesses will need to change. Large retail spaces could evaporate into micro-pickup hotspots, factory-to-front-door delivery, or curbside/drive-through. Many retail business models depend on suppliers paying for prime shelf space; if consumers stop walking into stores, this model breaks down.

It is my view that three core values are essential for an organization (and individual) to build the competencies to stay technology relevant. We all want to feel like we are an expert in something. Becoming knowledgeable and having confidence in one’s knowledge of an area is essential; however, be humble and accept that there is always more to learn. Today’s problems may not be solved using the methods of yesterday. Experimentation is required, as only through this process can new knowledge be generated. Test assumptions, fail fast, learn, and repeat. Lastly, the problems we face are far larger than any single individual can solve, and only by working with multidisciplinary teams can organizations remain technology relevant, innovate, and operate both at scale and with precision. I consider these values part of a practice; the process is iterative and progress is incremental, much like physical training. It is this practice that enhances agility and builds resilience.

Technology does not change the rules of the game, but the game itself. Stay humble and stay curious.

Overview of PIPEDA

Episode 49

Within a generation we have seen an explosion of innovation, bringing enormous benefit but also new challenges. The digital revolution has transformed how we conduct science, make decisions, and interact with each other. In a survey prepared for the Office of the Privacy Commissioner of Canada, most Canadians reported that they are concerned about how their online personal information could be used by organizations. This concern is not surprising considering the sheer amount of information that is harvested from every device, website, application, and cup of coffee we make. Unless you are a diehard video nut still holding onto a 12-year-old dumb plasma TV, you probably own a Smart TV and are completely unaware of the information it harvests. This blog will provide an overview of Canada’s current privacy legislation and provide some resources regarding its future overhaul.

Two laws govern privacy in Canada: the Privacy Act, which covers how the federal government handles personal information, and the Personal Information Protection and Electronic Documents Act (PIPEDA), which covers how businesses handle personal information. It is important to note that Alberta, British Columbia, and Quebec have their own private-sector privacy laws; however, they are substantially similar to PIPEDA. Ontario, New Brunswick, Newfoundland and Labrador, and Nova Scotia have additional health-related privacy laws that are substantially similar to PIPEDA.

Image of a modern outdoor security camera mounted on a pole

PIPEDA outlines 10 fair information principles that businesses must follow to protect personal information. The ten principles are:

  • Accountability
  • Identifying Purposes
  • Consent
  • Limiting Collection
  • Limiting Use, Disclosure and Retention of Personal Information
  • Accuracy
  • Safeguards
  • Openness
  • Individual Access
  • Challenging Compliance

On the surface, the principles provide common-sense guidance on a business’s compliance responsibilities. However, interpreting the law itself is problematic, so much so that legal experts have commented on the lack of clarity in the legislation. Teresa Scassa provides a great overview of some of the challenges with PIPEDA in its current state. In the legislation’s 20-year history, despite some cases going to federal court, no fines have been issued against businesses that have failed to comply. Currently, if a business makes reasonable attempts to secure personal information in alignment with the ten principles, it will likely not face any major liabilities under the current legislation (based on my interpretation as a non-lawyer).

In 2015, PIPEDA was amended to clarify that consent is valid only if it is reasonable to expect that an individual “would understand the nature, purpose, and consequences of the collection, use or disclosure of the personal information to which they are consenting.” This appears to push the onus onto organizations to ensure they have communicated their practices effectively. Yet this has proven to be wishful thinking: the statute lacks the enforcement mechanisms that might make a real difference in encouraging meaningful legal compliance.

On May 25, 2018, the European Union’s General Data Protection Regulation (GDPR) came into effect, a modern piece of legislation that sets guidelines for the collection and processing of personal information, with a strong emphasis on asking for permission and giving users control over their data. Roughly 275 million euros’ worth of fines have been issued, with the largest going to Google at 50 million euros. With the potential for large fines, there is now an entire cottage industry providing compliance and advisory services.

Both the Privacy Act and PIPEDA are under review, and the government is soliciting feedback from Canadians. With efforts to modernize the laws, the core areas of discussion are:

  • Enhanced consent
  • Form of consent
  • Simplified privacy policies
  • Technological solutions
  • De-identification
  • Privacy by design and privacy by default
  • No-go zones
  • Legitimate interests
  • Ethics
  • Enforcement
  • Education

Privacy is likely to remain a hot topic for the foreseeable future. Although new legislation could still be some years away from becoming law, it would be prudent to spend resources exploring the touch points in your organization that would be impacted by legislation like the GDPR. A little investment now will pay dividends in the future, as the initial groundwork will already have been laid.

Additional information:

Modernizing the Privacy Act Report

Discussions around Modernizing PIPEDA

Annual PIPEDA Report

Automation Part 3 – Business Intelligence

Episode 48

In part one of the automation series, we explored the adoption of electronic automation, and in part two, we explored the best management practices around the development and maintenance of electronic automation systems. This blog will take a common business process, business reporting, and explore how automation is radically changing the business intelligence profession.

Without reporting, an organization is blind. You cannot manage what you do not measure, and it is very clear to business leaders that the data their organization generates holds powerful insights. From insights comes Business Intelligence, and with it the foundation for both strategic and tactical decision making. Typically, business reporting is a task performed by analysts and middle management. I have had the pleasure of working with numerous individuals who took immense pride in their spreadsheet-fu for aggregating and analyzing financial and operational data. What is changing is that the spreadsheet-fu is being replaced with applications that automate most of the workflow. Reporting software is nothing new, but in the past, due to resource requirements, only large, sophisticated enterprises could justify the expense.

Business intelligence line and bar graphs on a computer screen

To understand how the business intelligence landscape is changing, it is important to cover the core process, extract-transform-load (ETL), involved in converting raw data into useful information. The process starts with data in a raw form, which could be sales figures, operational metrics, social media metrics, exports from enterprise applications, data from SaaS applications, or IoT data housed in various forms and locations. The first step is to extract this data from each of its respective sources. The data in its raw form will require cleaning and structuring: assigning and converting data formats (I’m looking at you, dates and times), removing unnecessary columns, filtering out unneeded values, merging and appending tables, creating calculated or conditional columns, extracting values, and much more. Careful consideration of the data structure at the source, and of its relationship with the transformation stage of the ETL process, can have a dramatic impact on performance, development requirements, and ease of maintenance (a topic for a future blog). Lastly, the transformed data must be loaded into a space for analysis.
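As a minimal sketch of those three steps, here is roughly what the same pattern looks like in Python; the file names and columns are invented for the example, and a real pipeline would extract from databases or APIs rather than two tidy CSV files.

    import pandas as pd

    # Extract: pull raw data from its sources (two hypothetical exports).
    sales = pd.read_csv("sales_export.csv", parse_dates=["order_date"])
    regions = pd.read_csv("regions.csv")

    # Transform: fix types, drop unneeded columns, filter values, merge tables,
    # and add a calculated column.
    sales = sales.drop(columns=["internal_notes"])
    sales = sales[sales["status"] == "closed"]
    sales["revenue"] = sales["quantity"] * sales["unit_price"]
    report = sales.merge(regions, on="region_id", how="left")

    # Load: write the shaped data somewhere a reporting tool can pick it up.
    report.to_csv("sales_report_ready.csv", index=False)

Tools like Power Query, Data Factory, or BigQuery wrap these same steps in managed, repeatable engines so the report writer is not rebuilding them by hand every quarter.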

To this day, there are individuals copying CSVs, merging data from other spreadsheets, and VLOOKUP-ing their way to a final report. It is not uncommon for individuals to spend a few weeks every quarter preparing such reports manually. With these manual systems, however, it is very challenging to ensure that sensitive data is only available to those with the proper permissions. Additionally, for the organization to be agile, the time between data generation and conversion to insight should be as close to real time as possible. Waiting several weeks to make a correction could be a significant lost opportunity.

Power BI

There are numerous tools that can completely remove the tedium of manual report building. Amazon Redshift, Google BigQuery, and Azure Data Factory are powerful ETL and data warehouse platforms, but they are typically aimed at professional developers and data engineers. The critical shift is the democratization of data analytics to more users through applications like Power BI. In the past, I faced a roadblock where I did not know how to parse JSON files. Then I discovered the Power Query engine in Excel; as a non-developer, my ability to work with data grew exponentially. It was a magical moment. The mixture of UI-driven development paired with access to the underlying application code is a gateway for Power Users to move into more sophisticated development techniques. A current trend is the increasing demand for individuals with strong business backgrounds coupled with the knowledge to develop reports with applications like Power BI, a natural evolution from analysts using spreadsheets.
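For readers curious what that kind of JSON wrangling looks like outside of Power Query, here is a hedged Python analogue; the file and field structure are assumptions, and Power Query itself would express the equivalent steps in its M language.

    import json
    import pandas as pd

    # A hypothetical JSON export: a list of orders, each with nested customer details.
    with open("orders.json") as f:
        records = json.load(f)

    # Flatten the nested structure into a table, similar in spirit to
    # Power Query's expand-column steps.
    orders = pd.json_normalize(records, sep="_")
    print(orders.head())

The point is less the specific tool than the jump in capability: once nested data can be flattened into a table, the rest of the familiar analysis workflow applies.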

With applications like Power BI, weeks of work can be replaced with a single button click (and even that button click can be automated). If we recall from the first blog in the series, human energy is replaced with computation; the ETL engine frees the report writer to spend more of their time connecting to more data sets and performing deeper analysis. Likewise, the intensity of analysis is increasing through richer visualizations, AI-driven inference and insights, and the ability to ask questions of the data. The shift from human inputs to computation is dramatically scaling the insight productivity of the individual and driving insight-led agility across the organization.

Automation Part 2 – Managing Software Automation Technologies

Episode 47

In our last blog, we explored the adoption of electronic automation. In this blog we will explore the best management practices around the development and maintenance of electronic automation systems, in the context of software systems.

Technology advances are exponential over time. With low- and no-code application development platforms experiencing significant growth over the last few years, the development of software automation systems is changing. Increasingly, non-professional developers (Citizen Developers and Power Users) work with these application development platforms to develop automation solutions with little to no supervision from IT departments. This empowerment of business users can benefit the organization, as the individuals closest to the problem are responsible for development. However, it is paramount to understand that these applications are software systems that must be managed no differently than an application developed by a traditional IT team. These systems must be maintained, and knowledge of their working structure must be institutionalized.

A man taps a virtual button on a dashboard touchscreen from behind (metaphor for automation)

Like mechanical systems, software systems require maintenance to function optimally. Maintenance can come in different forms: corrective, adaptive, preventative, and perfective. Corrective maintenance focuses on the discovery and removal of errors or faults within the software application; corrections could be made to the design, the logic, or the code itself. Adaptive maintenance is done when the environment containing the software changes, such as the operating system, hardware, or other software dependencies. This is very common, and adaptive maintenance is particularly important since the frameworks and platforms the software is built on are not static and receive updates that may impact the security and functionality of the software system. Preventative maintenance refers to changes made to the software to extend the useful life of the application. Replacing software applications can be both risky and expensive, depending on how critical the application is to the organization, so preventative maintenance that keeps the application stable, understandable, and maintainable is a prudent investment. Lastly, perfective maintenance focuses on improving (or perfecting) the user experience, adding features to meet new needs, or removing features that are no longer effective or functional.

With software automation development split between traditional IT and citizen developers, there are elements beyond maintenance to consider. Traditional IT professionals must always be learning: the languages, frameworks, and development methods a professional developer used a decade ago are likely not in use in new development today. No- and low-code development platforms are also always in flux, with a rate of change so rapid it is absolutely insane (mostly in a good way). Business users who split their time between their normal role and development are on the same skill treadmill as professional developers, one that constantly requires learning new skills. To maintain the solutions they develop, citizen developers must take the time to maintain their development skills.

Lastly, a persistent challenge for both professional and citizen developers is maintaining working knowledge of the systems deployed within the organization. Speaking from personal experience, looking at code that I have not worked with for several months is much like visiting an archeological site. Before I can get back to work on the code, there is an extended period of trying to determine what state of madness I was in while last working on that section. Exploring old code developed by a different person is more like visiting an archeological site on a different planet. It is critical for developers to include an appropriate level of relevant documentation and to implement an organization-wide development governance structure to reduce the confusion that can occur when different developers are left to their own devices. There is no shortage of stories of pro and citizen developers leaving an organization and crippling its infrastructure while others try to piece together the systems they built.

The next blog in this short series will look at a common automation problem: organizational reporting.