Thursday, 25 November 2021 02:59
Possibly the biggest misnomer of recent times is the term ‘zero-trust’, in relation to identity management and authentication of users wanting to access an organisation’s protected resources (computer applications, databases, sensitive documentation etc.). Vendors and industry commentators seem to see the term as referring to a brave new world in which current IAM and access control technology is dated and inadequate. This has been accompanied by an inability to describe what zero-trust really is and how it is applied. Zero-trust is not a technology and it is not a solution; you can’t go to your favourite vendor and buy a bit of zero-trust. It’s a corporate strategy, a reference architecture, a foundational belief. You construct a zero-trust environment by adhering to a set of practices that will, over time, significantly reduce the vulnerability of your organisation’s business operations.

The first step is to ensure a holistic approach to authentication and authorisation services. There’s no point in establishing a strong authentication service for webserver applications while leaving network segmentation relying on high-level group memberships.

Secondly, and yes – this is why it’s a misnomer – use a ‘trust-but-verify’ approach. When a particular data store is used as a source of authentication services, use another mechanism to verify it. This will typically use the person’s smartphone (push authentication for low assurance; facial recognition or fingerprint for higher assurance).
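As a sketch, a trust-but-verify check layers a second, independent factor on top of the primary directory lookup, with the factor chosen by the assurance level the resource demands. The names and levels below are hypothetical, chosen only to illustrate the pattern:

```python
# Hypothetical 'trust-but-verify' step-up check: the primary store
# authenticates the user, then a second, independent factor is required,
# selected by the assurance level the protected resource demands.
from enum import IntEnum

class Assurance(IntEnum):
    LOW = 1       # push notification to a registered smartphone
    HIGH = 2      # biometric (face or fingerprint) on the device

def second_factor_for(required: Assurance) -> str:
    """Pick the verification mechanism that matches the assurance level."""
    return "push_approval" if required == Assurance.LOW else "device_biometric"

def authorise(primary_ok: bool, required: Assurance, factor_used: str) -> bool:
    # Never trust the primary store alone: both checks must pass.
    return primary_ok and factor_used == second_factor_for(required)

assert authorise(True, Assurance.HIGH, "device_biometric")
assert not authorise(True, Assurance.HIGH, "push_approval")  # factor too weak
assert not authorise(False, Assurance.LOW, "push_approval")  # primary failed
```

The point of the sketch is that the second check does not rely on the first data store at all, so a compromise of one does not grant access.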

‘Zero-trust’ needs a corporate culture that values security and it requires a least-privileges approach to access control. Nothing new.
Wednesday, 24 November 2021 15:00

Edge Computing is one of those terms that mean different things to different people.

Its genesis is in the Operational Technology (OT) world. OT networks were typically isolated from the rest of the world because 1) they needed protecting and 2) they carried a great deal of mission-critical, low-latency traffic. To control the data leaving these networks it became fashionable to establish a computing device at the edge that would ensure only aggregated data left the network and only access to supervisory processes was supported.

A typical Industry 4.0 environment will have a myriad of systems on the network all controlling manufacturing processes, with supervisory systems to allow staff to monitor the production environment and receive notification of events as they happen. An Edge Computer allows supervision to occur and the amount of data being communicated back to head office to be controlled. Management don’t need to know how many work-processes have been completed; they just want to know how many finished products have gone into inventory.
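The aggregation role described above can be sketched in a few lines. This is illustrative only, with made-up event names, not a real industrial protocol: the edge node sees every shop-floor event but forwards only the figure head office actually needs.

```python
# Illustrative sketch: an edge node receives per-work-process events from
# the plant floor but forwards only the aggregate that head office needs
# (finished goods into inventory), never the raw event stream.
from collections import Counter

def aggregate_for_head_office(events):
    """Reduce raw shop-floor events to the single figure management wants."""
    counts = Counter(e["type"] for e in events)
    return {"finished_to_inventory": counts.get("inventory_in", 0)}

events = [
    {"type": "process_complete", "station": 3},
    {"type": "inventory_in", "sku": "A100"},
    {"type": "process_complete", "station": 7},
    {"type": "inventory_in", "sku": "A100"},
]
print(aggregate_for_head_office(events))  # {'finished_to_inventory': 2}
```

Four events arrive, one number leaves: that reduction is where both the bandwidth saving and the isolation benefit come from.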

Vehicles are another case in point. There is an enormous amount of processing occurring in a car, from the critical control messaging advising various components of their status, to trip monitoring and recording. Real-time external communication is becoming increasingly important as road assets become more intelligent and provide better traffic management to appropriately equipped vehicles. An Edge Computing device can make sure that just the service record is made available to the workshop.

Then there’s the home. With increasingly sophisticated devices capable of being controlled remotely, everything from air conditioners and lights to charging stations and grid feed-in controllers, the home is becoming an environment that must be protected and controlled, and an Edge Computing device, typically the Wi-Fi router, can help.

There are two main reasons for using an Edge computer:

  1.  Cost – being able to limit the amount of data leaving the network means that the bandwidth requirement is lower and the associated cost is reduced.
  2.  Security – isolating production equipment from administrative interference by providing access to just the data that is needed by management, means operational systems can be protected.

The provision of edge computing devices is a technology whose time has come.

Monday, 09 September 2019 09:01
Australia is finally putting some ‘runs on the board’ in the area of identity information. This is good news because collecting, storing and using identity information is becoming more important for a variety of reasons:
  • There is a heightened concern about the privacy of personal information as we read about another breach or unsolicited sale of identity information.
  • We are becoming increasingly uncomfortable with what’s been called ‘object-based media’, which tailors our on-line experience and modifies what we read depending on our interests, gleaned from our searches and ‘likes’.
  • Organisations providing consumer or citizen services seek to improve their users’ experience by eliminating username and password logins, but at the same time try to gather as much information as possible about their clients.
So how has our government stepped up to the task of acquiring, managing and protecting our identity? The federal government’s approach is built around the MyGov initiative. It’s taken many years to get to this point.

The two main agencies working in this space are the Department of Human Services (DHS) and the Australian Tax Office (ATO), each following their own paths for their own purposes. The Digital Transformation Office, expanded and renamed the Digital Transformation Agency (DTA), was charged with managing the differences between the two departments and setting policy as to how a common identity management environment was to be deployed. They considered following the UK Verify model but that was eventually dropped. Political pressure made it impossible for an independently developed identity management solution to be deployed. Common wisdom and multiple consultants’ recommendations suggested a federated environment that would incorporate state-based identity providers such as Queensland’s CIDM service and ServiceNSW, so that citizens with a registered identity with their state government would be able to use that identity to log onto federal systems.

But DHS won. The MyGov system is a closed authentication environment that requires service providers to register their application on the MyGov platform. While the platform has been opened up to support other identity provider services, AustPost being the first, there is little incentive to sign up to any other service if you already have a MyGov account. Queensland is already decommitting from CIDM and focussing on the driver licence and 15+ card registrations that will be incorporated into an identity broker framework.

What’s more, the DTA is now working with the National Exchange of Vehicle and Driver Information (NEVDIS), originally established to facilitate cross-state sharing of driver licence and vehicle registration information with law enforcement officials, to allow them to get access to driver licence information. They particularly want access to the photo so that they can reach assurance level 3, something the ATO wants for citizen access to more sensitive services. Driver licence photos are getting better; in Queensland they meet ICAO standards, which facilitates three-factor smartphone authentication as smartphone manufacturers open up their facial recognition technology.

But sharing of photos is a concern for Australians. We provide photos for the purpose of getting a driver licence, not for a central database to be used for other purposes to which we have not consented. This contravenes Australian privacy legislation.

Facial recognition simply needs a facial (visage) template that records measurements such as the distance between the eyes, the width and length of the nose, the mouth position and the chin shape. This enables facial recognition but not image reconstruction. It’s also a lot less data to transmit and store. It is hoped that this is all that gets contributed to the DTA. It could be argued that since a template is a derivation of the photo it is not captured under privacy legislation.
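A toy illustration of the point (not a real biometric algorithm, and the measurements and threshold are invented): a template is just a small vector of numbers, so matching compares vectors rather than images, and the handful of values cannot be rendered back into a photograph.

```python
# Toy sketch: a facial template is a small vector of measurements, so
# matching means comparing vectors, not storing or transmitting images.
import math

def match(template_a, template_b, threshold=0.05):
    """Accept if the normalised Euclidean distance is under the threshold."""
    dist = math.dist(template_a, template_b) / math.sqrt(len(template_a))
    return dist < threshold

enrolled = [0.42, 0.31, 0.27, 0.55]   # e.g. eye distance, nose width/length, chin shape
probe    = [0.43, 0.30, 0.27, 0.56]   # fresh capture of the same face
stranger = [0.61, 0.22, 0.35, 0.48]

print(match(enrolled, probe))     # True
print(match(enrolled, stranger))  # False
```

Four floats versus a multi-megabyte ICAO-grade photo is also why templates are so much cheaper to transmit and store.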

So – it’s good that the federal government is finally moving ahead with an on-line authentication service, it’s just too bad it’s not a truly federated system, it requires service providers to be exposed via the MyGov environment and it’s hoped that the driver licence application process will soon close the "consent" objection to sharing visage objects.

Oh – on the topic of privacy, Australia has some of the best privacy legislation. It’s a shame the Office of the Australian Information Commissioner (OAIC) has not been funded to investigate and prosecute the many organisations flagrantly abusing consumer privacy every day. We are continually asked for more information than is necessary for the services we’re requesting, and organisations are not deleting information when they can’t be bothered to update it. And most Australian organisations are not capable of responding to a person’s request for access to their data and requests to correct errors (required under the legislation).

Many company privacy policy statements, a requirement under the legislation, are very poor and the number of breaches, with notification finally a legislated requirement, indicates that companies are not safeguarding the data they keep on us.

It’s also a shame that the Attorney-General’s Department has not moved ahead with the Cross-Border Privacy Rules (CBPR). We need to plug-and-play in Asia, yet we spend more time on Europe’s General Data Protection Regulation (GDPR). Now, GDPR is the gold standard when it comes to privacy practices, but Asia consists of sovereign states that each set their own privacy regulation, nothing like Europe’s nation states that adhere to a common regulation. Again, the OAIC’s role in CBPR needs funding.

So it’s a mixed report card for Australia; we’ve done some things right, and we’re finally going to have an authentication system to access federal government services. It’s too bad that I must set up a MyGov account to do so; I can’t use my QGov account.

But that’s the reality we live with - political factions seem to trump logical decisions. 


Tuesday, 22 January 2019 00:00

One of the latest topics to be selected for media-mania is facial recognition. Can we of sound mind and technical education please provide a balance to the self-serving journalists who seek to promote their names through social media hype?

There are three areas of confusion that have surfaced over the past six months:

  • Privacy issues surrounding facial images

There are no privacy issues surrounding facial recognition. There are, of course, concerns regarding the storage and sharing of facial images that persons allowing themselves to be photographed as part of a registration process should question. But facial recognition uses facial templates (sometimes called facial signatures) and does not require transmission or storage of facial images.

  • Concerns regarding CCTV cameras

This item supposes that local councils are mapping our movements when we are caught on cameras in public spaces. The technology is not currently available to do this. It requires one-to-many matching and requires ICAO-grade images.

  •  Comparisons with the Chinese social credit program

Whatever you think of Beijing’s initiative to promote social harmony it has nothing to do with facial recognition – that just happens to be one of the technologies they purport to use. The only issue is whether or not democratic countries want to go down that route.

It’s important that technically competent people help to quell fear-mongering and ensure a level-headed approach as new technology becomes mainstream.

In helping people understand the technology it is important to differentiate between the two main types of facial recognition; they are vastly different:

1. One-to-one

This is the area in which most change is occurring and where we are benefitting the most from a better user-experience. There are multiple use-cases, for instance:

-  SmartGate immigration stations. These are the automated devices used at border crossings that allow you, if you’re lucky, to enter a country without talking to a border-control officer. They work best in Europe where passports from a wide number of countries are accommodated. There are two steps to the process: you present your passport allowing the system to retrieve your facial template, and then a camera verifies that it is actually you travelling.

-  Windows Hello. After registering your face with your PC, and creating your facial template, subsequent logins will turn on the infra-red camera to verify your facial image even in low light.

This type of facial recognition is the future of authentication. Most new smartphones have strong graphics-processing capabilities and are able to positively identify you to a high assurance level. Many governments and commercial organisations want a higher level of assurance than most PIN-based or push-authentication systems can provide, so this type of facial recognition has a bright future.

2. One-to-many

This is usually the type of facial recognition that garners the most interest and criticism from members of the public. It is widely used in criminal investigations where a visual image of an alleged perpetrator can be compared with police files of stored facial templates in order to identify a suspected criminal.

This type of facial recognition takes time and processing power; it is not suitable for authentication purposes. It has been trialled in multiple airports to attempt to identify people on watch lists, or individuals with red-flag indicators, entering or leaving a country. These trials have had very limited success because of high false-negative rates.
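The cost difference between the two modes can be sketched directly. In this illustrative example (two-element templates and a tiny gallery, purely for demonstration), verification is a single comparison against a claimed identity, while identification must scan every record and may still return no confident match:

```python
# Sketch of why one-to-many identification is costlier than one-to-one
# verification: verification is one comparison; identification is O(N)
# over the whole gallery, and the best match may still miss the threshold.
import math

def verify(probe, enrolled, threshold=0.1):
    """One-to-one: a single comparison against the claimed identity."""
    return math.dist(probe, enrolled) < threshold

def identify(probe, gallery, threshold=0.1):
    """One-to-many: compare against every record; return best match or None."""
    best_id, best_d = None, threshold
    for person_id, template in gallery.items():
        d = math.dist(probe, template)
        if d < best_d:
            best_id, best_d = person_id, d
    return best_id

gallery = {"alice": [0.4, 0.3], "bob": [0.7, 0.1]}
probe = [0.41, 0.29]
print(verify(probe, gallery["alice"]))  # True (one comparison)
print(identify(probe, gallery))         # 'alice' (scans every record)
```

With millions of records instead of two, and uncontrolled CCTV-quality probes instead of clean captures, the linear scan and the tight threshold are exactly where the time, processing power and false-negative problems come from.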

So what should the technical professionals be recommending to our clients?

  1. When we allow ourselves to be photographed as part of a registration process, e.g. obtaining a driver licence, we should ensure we are satisfied with the privacy statement of the organisation involved. In most western countries privacy legislation allows companies and government to collect only the information they need for the transaction the user is undertaking. They can’t collect information that just might be needed in the future or would be useful for their demographic analysis program. An organisation cannot collect a facial template if it doesn’t need it for authentication, and it can’t ask for a photograph unless it’s needed for the requested business process, e.g. an application for a driver licence. If a facial template or a facial image is collected it can only be used for the purpose for which it was collected. Government cannot use driver licence photos to authenticate citizens to government services unless explicit consent is collected.
  2. We need to identify the current limitations of the technology. Much has been written about the ability to “fool” facial recognition systems with a modified photograph. It seems that a suitably “doctored” image can be used to cause a false positive. We would be remiss, as with any authentication mechanism, if we did not assist our clients in identifying situations in which a technology does not provide the required level of security.
  3. Perhaps the most important advice we can give, however, concerns the potential for facial recognition to radically change the user experience in the future. Users of Windows Hello won’t go back to passwords, PINs or fingerprints. Facial recognition is so simple, and exceeds most security requirements, that it is the future for authentication on PCs and laptops, and it will be the authentication tool of choice on smartphones too, with the FIDO Alliance supporting a facial recognition certification program.

No – passwords aren’t dead, but facial recognition is one more nail in the coffin.


Monday, 24 July 2017 14:11
Most developed countries have enacted privacy legislation with the intent to protect their citizens from bad corporate practices that may either deliberately or inadvertently release their personally identifiable information (PII) to unauthorised persons. While this is obviously a well-intentioned activity it does have a commercial impact. Companies wanting to transact across sovereign borders must ensure they adhere to privacy legislation in the countries in which they do business, and individuals providing their PII to foreign companies need to be confident that their private data is being adequately protected in the foreign jurisdiction.

Europe has addressed these issues via the General Data Protection Regulation (GDPR) initiative which harmonises privacy legislation across European Union countries. The main driver for the GDPR is protection of individuals’ privacy. The legislation requires organisations to establish data controllers for repositories of PII and to seek consent for the use of PII within their business processes. GDPR also provides for recourse in the event of contravention of the regulation. Indeed the penalties can be quite severe with enforcement agencies in each country ready to investigate, and if necessary prosecute, those that violate the legislation.

In the Asia-Pacific region the approach has been quite different. It is unrealistic to expect a harmonisation of privacy regulation across countries in the region, so the Asia-Pacific Economic Cooperation (APEC) established the Cross-Border Privacy Rules (CBPR) system. Countries joining the CBPR must evaluate their privacy legislation against the nine principles of the APEC Privacy Framework and then provide a mechanism for companies to be ‘certified’ by an Accountability Agent as being compliant with the CBPR.

While both initiatives seek to protect private data they are very different in their approach. GDPR relies on a legislative mandate that enjoins member countries in a prescriptive solution. It is based on homogenised legislation that ensures similar treatment of infractions regardless of where they occur in the European Common Market. By contrast, participation in the CBPR system is entirely voluntary; it is based on self-assessment with third-party verification. It relies on negotiated settlement of alleged contravention and imposes no restriction on member countries regarding their local privacy laws. In order to participate a country must have enacted privacy legislation; it is a prerequisite because member countries must map their local law to the CBPR Privacy Framework as a step in their application to join the initiative. Some Asian countries are not in a position to consider CBPR because they lack the legislative framework to participate.

So – GDPR is predicated on tight coupling between member states that enables a strong legislative response to the task of data protection. CBPR accommodates a loose coupling of member countries imposing a framework that enhances cross-border trade and provides some recourse for individuals in the case of privacy regulation contravention by a foreign participant.


                              GDPR                                        CBPR
Program characteristics       Tight coupling of European member states    Loose coupling of APEC member countries
Legislative framework         Prescriptive, based on a single privacy     Guidance, accommodating multiple privacy
                              regulation                                  laws
Recourse for contravention    Punitive, with significant penalties        Negotiated, with local agreements for redress

Table 1 - Comparison of GDPR & CBPR

While GDPR and CBPR, by necessity, take different approaches, both serve to raise awareness of privacy issues and build trust in the Internet as a vehicle for digital commerce.