Unsecured Data in the Wild: Security by Default

In the last month, there have been a couple of stories about the discovery of unsecured sensitive data in "the cloud." For instance, Microsoft has been in the news recently due to a possible accidental leak of 38 million of its customers' data records, traced to a configuration that intentionally defaulted to unsecured. Around the same time, the review website Senior Advisor was discovered to have misconfigured its instance of the AWS S3 cloud storage service, leading to an exposure of about three million identifying records. In another case reported this month, the Japanese company Murata learned that a subcontractor had — against company policy — uploaded more than 75 thousand documents containing sensitive customer data to a cloud provider.

While each case is unique, there's a common thread here and in other similar stories. As companies expand their data transformation activities beyond the confines of the internal IT "castle," we all need to be diligent about where instances of company data can be found and who is accessing them. And one of the keys to success is relying on platforms that provide extensive monitoring and a rich set of alerting mechanisms.

The lesson in choice architecture

To be clear, these companies didn't set out to build systems that expose customer data. However, it's possible that their products' default configurations for data access and publication, alongside confusing user interfaces, contributed to the broad public exposure of customer data to anonymous access.

How systems are designed and how they behave "by default" is a choice made by the people who create hosting products and services. And those choices, by extension, can become the policies of all the businesses that use those systems. Our choices matter.
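To make the choice-architecture point concrete, here is a minimal sketch of a "secure by default" configuration object. The class, its fields, and its names are illustrative assumptions, not any vendor's actual API; the point is that a team that configures nothing gets the safe behavior, and exposure requires a deliberate, visible opt-in.

```python
from dataclasses import dataclass

# Hypothetical sketch: secure-by-default settings for a published dataset.
@dataclass
class DatasetConfig:
    name: str
    # Choice architecture: the safest option is the default.
    public_read: bool = False          # anonymous access is off by default
    authenticated_read: bool = False   # even signed-in read is opt-in
    audit_logging: bool = True         # access logging is on by default

    def allows_anonymous(self) -> bool:
        return self.public_read

# A team that does nothing special gets the safe behavior.
default_cfg = DatasetConfig(name="customer-records")
assert not default_cfg.allows_anonymous()

# Public exposure requires an explicit, reviewable choice.
shared_cfg = DatasetConfig(name="press-kit", public_read=True)
assert shared_cfg.allows_anonymous()
```

Flipping the defaults the other way — `public_read: bool = True` — is exactly the kind of vendor choice that silently becomes policy for every customer who never touches the setting.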

It's hard to protect what you can't see.

As companies embrace new "information democratization" patterns like low/no-code platforms and robotic process automation (RPA), significant tradeoffs come into effect. These can quickly curtail old command-and-control governance patterns, reliance on entry barriers, and reliance on big-up-front design. In some cases, new on-site tools and cloud-based services that provide an independent, streamlined data access and publishing workflow might bypass established governance and review. Unfortunately, it is often hard to see these security challenges since they arrive as part of a new wave of positive changes, when agility is advertised as a precious advantage in a competitive environment.

IT leadership must have "eyes on the field" to know when new products are brought into the company. For example, security reviews for brand-new low-code data systems need to include a careful review of common data connector patterns, typical publishing workflows, and default security settings. Any new public endpoints — whether through local gateways or hosted remotely in the cloud — need to be added to periodic penetration testing and regular data reviews. And every active service needs to be continuously monitored for unusual traffic and access patterns. All this alongside your organization's regular data management and threat reviews.
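The review described above can be as simple as a script run against an endpoint inventory. The following sketch assumes a hypothetical inventory format (the record fields `anonymous_access`, `in_pen_test_scope`, and `traffic_monitored` are illustrative, not from any real tool) and flags endpoints that fail the checks named in this section.

```python
# Hypothetical sketch: a periodic review that flags risky public endpoints.
def flag_risky_endpoints(endpoints):
    """Return (url, reasons) pairs for endpoints needing security attention."""
    flagged = []
    for ep in endpoints:
        reasons = []
        if ep.get("anonymous_access", False):
            reasons.append("allows anonymous access")
        if not ep.get("in_pen_test_scope", False):
            reasons.append("not in penetration-testing scope")
        if not ep.get("traffic_monitored", False):
            reasons.append("no continuous traffic monitoring")
        if reasons:
            flagged.append((ep["url"], reasons))
    return flagged

# Example inventory (illustrative data only).
inventory = [
    {"url": "https://api.example.com/orders", "anonymous_access": False,
     "in_pen_test_scope": True, "traffic_monitored": True},
    {"url": "https://data.example.com/export", "anonymous_access": True,
     "in_pen_test_scope": False, "traffic_monitored": False},
]

for url, reasons in flag_risky_endpoints(inventory):
    print(f"{url}: {'; '.join(reasons)}")
```

The hard part, as the next paragraph notes, is not this check — it's knowing that the endpoint exists so it lands in the inventory at all.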

It seems most of these actions (monitoring traffic, periodic pen testing, etc.) aren't complicated work. They require diligence and dedication both from those implementing the testing and from those analyzing the results. However, a key challenge is often knowing where to look. It's hard to protect your company, employees, and customers from data exposures you can't see.

Observability is the key.

As systems become more decentralized, and as more "citizen developers" engage in creative and innovative practices within the company, there's a need to extend the level of observability generally — which means getting more information about who is doing what with your data.

Often, you don't want to prevent people from accessing data sources within the organization. You likely want to enable innovative use of your internal data — that's what these new agile, low-code tools are all about. What's needed is a higher level of information about who is accessing and publishing data. What is often needed is another "control plane" that can alert IT teams when new sources come online, permit teams to review the new connectors, and set up continuous monitoring of their use.
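A minimal sketch of such a control plane, under stated assumptions: class and method names here are invented for illustration. The point is the workflow — a new connector triggers an alert the moment it's registered, stays in a pending state until reviewed, and only then moves into ongoing monitoring. Nothing blocks the citizen developer; IT simply gains visibility.

```python
# Hypothetical sketch: a control plane that alerts on new data connectors
# and tracks them from registration through review to monitoring.
class ControlPlane:
    def __init__(self, alert_fn):
        self.connectors = {}      # connector name -> lifecycle status
        self.alert_fn = alert_fn  # how IT is notified (email, chat, ticket)

    def register_connector(self, name, owner):
        # Registration never blocks the team; it just creates visibility.
        self.connectors[name] = "pending-review"
        self.alert_fn(f"New data connector '{name}' registered by {owner}")

    def approve(self, name):
        # After review, the connector moves into continuous monitoring.
        if self.connectors.get(name) == "pending-review":
            self.connectors[name] = "monitored"

# Usage: alerts are collected here; in practice they'd go to an IT channel.
alerts = []
plane = ControlPlane(alert_fn=alerts.append)
plane.register_connector("sales-sheet-sync", owner="ops-team")
plane.approve("sales-sheet-sync")
```

The design choice worth noting is that the alert fires on registration, not on approval — the team's agility is preserved, while IT learns about every new source before it becomes an invisible exposure.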

Quality data publishing systems have this alerting and monitoring built in as a core feature. They include the ability to assess potential data publishing leaks and dangerous data consumption patterns. With power comes responsibility. A reliable platform partner helps you protect your data, not just make it easy to publish on the open web.

The choice is yours

No company wants to be named in a story about mismanaged data access. Nor does any organization want to have its reputation tarnished in a legal tangle over who is liable for damages due to a data breach. So, with little to no margin for error, CIOs are advised to focus on partners that eagerly and vigilantly aspire to be secure — ones that make it easy to do "the right thing" and difficult to mistakenly export or expose sensitive data.

As we build more composable, more "plug-and-play" systems, we all run the risk of creating solutions that are harder to monitor and, therefore, harder to manage. As a result, selecting partners and vendors takes on new importance when the tools you're enabling are general components designed to be used by a broad spectrum of stakeholders, including internal teams and external partners.

An essential lesson in these recent news stories is that we are all responsible for the choices we make. We choose our vendors, choose our platforms, choose our levels of monitoring and observability, and choose the internal processes that teams follow to successfully access and publish critical information on platforms both inside and outside the corporate firewall.

Now is an excellent time to talk with your teams about the state of data publishing across your company, and to verify you have a solid initial review and ongoing monitoring in place. Now is not the time to pull back on enabling citizen developers and innovation inside your organization. Instead, you can choose to both protect your data and empower your staff to meet the challenge of enabling speed and agility at scale.