The need for audit in CyberPatriot VII

This open letter is intended to spark discussion among the CyberPatriot Operations Center (CPOC) and the Coaches and Mentors involved in CyberPatriot VII (2014–2015). As such, I will not elaborate on the competition or its goals and objectives, which are available on the organization’s website.

First, let me acknowledge that putting on a national high school cyber competition must be extremely difficult and is far outside my own capabilities. I have put together smaller training exercises and I teach a cyber warfare and terrorism course at San Diego State University, so I know a little something about cyber, education, and pedagogy. Ryne and the team at CPOC are to be commended for their efforts.

What I would like to outline today are some accountability and audit issues that I see in the current CyberPatriot framework, along with four concrete proposals to improve the situation. I welcome and encourage your feedback! Medium.com allows for convenient, focused commentary: click the + icon just to the right of a paragraph so your comment is contextually linked (on mobile devices, tap the paragraph itself). Let’s get started!


Disconnect between Learning and Competition

I have been involved with CyberPatriot V and VI as a technical mentor for various teams. Both years, the teams qualified for the National Finals. Both years, I was frustrated that there was a stronger focus on competition secrecy than on the STEM and cyber education goals. Along with our team’s coach, I asked for greater guidance on training objectives. Most of last year’s training website consisted of links to external blogs. This year, the training materials are improved, but a set of training objectives (or a task list) for technical mentors is still missing.

We can have our competitors review PowerPoint slides for hours, but I prefer to teach with hands-on instruction and interaction. But what shall I teach? The guidance is scarce: “account management” and “malware” are listed after some rounds of competition. What are the concepts that a successful mentor should endeavor to teach a competitor? You would not expect a national academic language testing service to list “verbs” and “nouns” as items and leave the curriculum to vary widely between teachers. “Account management” could mean any number of things, and the way in which the competition is scored (more on this later) could determine whether one style of account management is awarded points while another (equally valid) style is neglected. How can we guide our competitors toward the ideal (approved) account management best practices if we are not told what they are?

Proposal 1: Provide a detailed Educational Objectives Task List for Technical Mentors

By having a detailed EOTL, mentors can structure the limited time they have with competitors around proper “lesson plans” and craft hands-on exercises with practice virtual machines that provide dynamic interaction for students. Mentors will know they have completed their duty of educating once a student can demonstrate mastery of the detailed concepts. Example: know the difference between a user account that has been locked out by failed password attempts and an account that has been disabled. These tasks can be coded and grouped, such that when a competition round has concluded, the published overview of the round can include a scoring breakdown tied directly to educational objectives (e.g., “10 points for AM.3.2”) so that mentors and coaches know precisely where their teams need assistance. This does not reveal answers or encourage rote memorization. It is a solid pedagogical foundation that ties assessment to objectives.
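To make that example concrete, here is a minimal sketch of the kind of hands-on exercise a mentor might build around it. It assumes a Linux practice virtual machine and the standard /etc/shadow layout; note that a lockout caused by failed password attempts is tracked by pam_tally2/faillock rather than /etc/shadow, so this sketch only contrasts a password locked with passwd -l against an account disabled via an expiry date. None of this is CyberPatriot material.

#!/usr/bin/env python3
"""Illustrative mentor exercise (not CyberPatriot material): on a Linux
practice VM, distinguish a locked password from a disabled account."""

SHADOW = "/etc/shadow"  # reading this file requires root

def classify(entry: str) -> str:
    fields = entry.split(":")
    user, pw_hash, expire = fields[0], fields[1], fields[7]
    if pw_hash.startswith("!"):
        return f"{user}: password locked (e.g., via passwd -l or usermod -L)"
    if expire:
        return f"{user}: account disabled (expiry date set, e.g., via usermod -e)"
    return f"{user}: active"

if __name__ == "__main__":
    with open(SHADOW) as fh:
        for line in fh:
            if line.strip():
                print(classify(line.strip()))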

SysAdmin Styles and Best Practices

The beauty of modern network operating systems is that they are so flexible. Linux in particular offers a bevy of ways to achieve the same security goal. Because of this flexibility, I am concerned about scoring on Linux systems. I do not know how scores are computed. I do not have an answer key. But I do have a suspicion: if a competitor sets PASS_MAX_DAYS in login.defs to 60 and the scoring system is looking for 30, will the competitor be awarded points? Should they be awarded points? When does a best practice become an edict?
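To illustrate what an auditable check could look like, here is a hypothetical sketch (emphatically not CPOC’s actual engine) of a scorer parsing PASS_MAX_DAYS out of /etc/login.defs. The difference between the two branches at the bottom, one blessing exactly one value and one accepting any reasonable hardening, is precisely the kind of decision I would like mentors to be able to see.

#!/usr/bin/env python3
"""Hypothetical scoring check (not the actual CyberPatriot engine):
parse PASS_MAX_DAYS from /etc/login.defs and decide whether to score it."""
import re

def read_pass_max_days(path="/etc/login.defs"):
    pattern = re.compile(r"^\s*PASS_MAX_DAYS\s+(\d+)")
    with open(path) as fh:
        for line in fh:
            match = pattern.match(line)
            if match:
                return int(match.group(1))
    return None

value = read_pass_max_days()

# A brittle check awards points only for one blessed value...
brittle_pass = (value == 30)
# ...while a policy-based check accepts any hardened setting.
policy_pass = (value is not None and 1 <= value <= 90)

print(f"PASS_MAX_DAYS={value}  brittle={brittle_pass}  policy={policy_pass}")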

COMPLEXITY. In Windows operating systems, there is a single local security policy setting, “Password must meet complexity requirements,” to require complex passwords. That boolean value is simple for a scoring system to check in the registry. In Linux, the pam_cracklib module must be added to the PAM stack and configuration lines edited. Parameters on the configuration line can appear in any order. Is the scoring system flexible enough to handle this? We have no way to audit that.
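To see why order-insensitivity matters, here is a hedged sketch of how a scorer could detect pam_cracklib options in /etc/pam.d/common-password without caring how a competitor ordered them. The file path and module name follow the usual Ubuntu layout; the parsing approach is purely illustrative.

#!/usr/bin/env python3
"""Hypothetical check: collect pam_cracklib options from
/etc/pam.d/common-password regardless of their order on the line."""

def cracklib_options(path="/etc/pam.d/common-password"):
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if line.startswith("password") and "pam_cracklib.so" in line:
                # Everything after the module name is an unordered option list.
                opts = line.split("pam_cracklib.so", 1)[1].split()
                return {opt.partition("=")[0]: opt.partition("=")[2] for opt in opts}
    return {}

# "retry=3 minlen=10 minclass=3" and "minclass=3 minlen=10 retry=3"
# both produce the same dictionary, so either ordering can be graded.
print(cracklib_options())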

LINUX. The parameter minclass=n instructs the module to require ‘n’ classes of characters (lowercase, uppercase, numerals, and others). By setting this to 3 or 4, one could reasonably consider the system to “require complex passwords.” But what if the scoring engine is looking for the alternate parameter style (ucredit, lcredit, dcredit, ocredit)? That format offers finer control over the mix of character classes, but both styles achieve the end goal of “requiring complex passwords.” When does a style (minclass or xcredit) trump a best practice (complex passwords)?
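A transparent scorer could treat the two styles as equivalent. The sketch below, with illustrative thresholds of my own choosing, accepts either minclass=3 (or higher) or a negative-credit configuration that forces at least three character classes; it builds on the option dictionary from the previous sketch.

#!/usr/bin/env python3
"""Hypothetical equivalence check: either pam_cracklib parameter style
should satisfy the objective 'require complex passwords'."""

def requires_complexity(opts: dict) -> bool:
    # Style 1: minclass=N demands N distinct character classes.
    if int(opts.get("minclass", 0)) >= 3:
        return True
    # Style 2: a negative Ncredit value requires at least |N| of that class.
    required_classes = sum(
        1 for key in ("ucredit", "lcredit", "dcredit", "ocredit")
        if int(opts.get(key, 0)) <= -1
    )
    return required_classes >= 3

print(requires_complexity({"minclass": "4"}))                                    # True
print(requires_complexity({"ucredit": "-1", "lcredit": "-1", "dcredit": "-1"}))  # True
print(requires_complexity({"minlen": "10"}))                                     # False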

WINDOWS. In this month’s practice virtual machine, the HomeWebServer software is listening on port 80 and providing HTTP server capabilities. What is clever about this executable is that it does not install as a service but rather runs as a persistent binary. This is a great example of CPOC testing competitors’ critical thinking. They have been trained to disable IIS through the Services control panel, but the README guidance said that the workstation should not provide any server or file-sharing services. The leap of knowledge is for competitors to see port 80 listening (perhaps through TCPView or netstat) and stop the sharing. If HomeWebServer is uninstalled via the Add/Remove Programs control panel (as instructed by the training materials), the listening service on port 80 is stopped; however, no points are awarded. It is only after removing the directory C:\Programs\WebServer that points are awarded. When does a style (uninstall versus delete the directory) trump the end goal (do not provide web services)?
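The contrast becomes stark if you write the two checks out. Below is a hedged sketch, not the actual scoring engine, contrasting a goal-based check (nothing is listening on port 80) with an artifact-based check (one specific directory is gone); only the directory path comes from the practice virtual machine described above.

#!/usr/bin/env python3
"""Hypothetical contrast between goal-based and artifact-based scoring
for the HomeWebServer example."""
import os
import socket

def port_80_listening(host="127.0.0.1", port=80) -> bool:
    # Goal-based check: is anything still serving HTTP locally?
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(1)
        return sock.connect_ex((host, port)) == 0

def webserver_dir_removed(path=r"C:\Programs\WebServer") -> bool:
    # Artifact-based check: was one specific directory deleted?
    return not os.path.isdir(path)

# A competitor who uninstalls via Add/Remove Programs satisfies the first
# check, yet per the behavior observed above, only the second check scores.
print("port 80 closed:", not port_80_listening())
print("directory removed:", webserver_dir_removed())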

Proposal 2: Scoring Audit Committee

Establish an independent audit committee, made up of Technical Mentors, that can provide feedback to CPOC about the actual method by which points are awarded (e.g., the regular expressions used to parse configuration files, the registry entries consulted). This proposal ties in with Proposal 4, discussed later.

Answer Keys for Competition Rounds

This is the most controversial topic and has earned me vitriol from some staff members. CyberPatriot took a huge step forward by providing detailed answer keys along with the practice virtual machines for the month of September. This could not have been easy to compile, and I thank them for this move toward greater transparency. But it should not stop with practice rounds!

Our team’s coach and I brought up this topic during the brief portion at the tail end of this year’s Coaches’ Meeting that was not devoted to a 45-minute PowerPoint session. Rather than being considered, the request was flatly denied on the grounds that if the answers were published, other teams would know what to do in future rounds.

I was flabbergasted — certainly education is the overall goal for high school youth and the competition is the vehicle through which the education is delivered, no? As an organization, should we not hope for there to be learning opportunities with every round?

An analogy to the NFL was offered: experience in the Super Bowl benefits teams in future seasons. The analogy fails because, in the NFL, the referees explicitly state the infraction and teams are allowed to challenge a call using video review.

Another participant in the meeting pointed out that there are hundreds of ways of staging a vulnerability on a machine; providing the answers to a particular round should not prevent or invalidate any future vulnerabilities. This point was also flatly dismissed rather than considered. I would hope that CyberPatriot VII has a more open attitude toward new ideas, and I believe it does, given the new training materials and practice virtual machines.

Proposal 3: Answer Keys for Competition Rounds

By providing an answer key after competition rounds, Coaches and Mentors alike can identify areas where their instruction is lacking or where outdated, non-best practices are being taught, and improve their own capabilities. Certainly this document would be highly restricted, and the adults would be honor-bound not to share the answers with competitors. But the point is that knowing the answers to round x will not (and should not) invalidate round x+1.

In addition to the opportunity for Mentors to improve themselves, an answer key would provide a very open and transparent method of auditing the competition that is currently scored in secret. A competition without scoring transparency leads to questions about accuracy. Look, everyone makes mistakes — myself, daily! Is it possible for CyberPatriot to make a scoring mistake? Certainly. But how would we know about the mistake without an answer key?

Incorrect README

Perhaps an anecdote will be of interest. During the summer, exhibition rounds were held to spark interest among competitors and to allow Mentors to get some experience with otherwise-restricted virtual machines. The Round 5 Ubuntu virtual machine had a README that read, in part:

This web server is for official use only by authorized users.

One would imagine that Apache, the web server, is a critical service. Later in the README, the competitor is told that SSH and Samba are critical services and must remain online. No comment is made about Apache, nor is the competitor told to regard all other services as non-critical. We train our competitors to approach the README as they would an SAT word problem: extract meaning from every sentence and use critical thinking to “read between the lines.” Here, I believed (as did our Coach) that Apache was deliberately left out of the list of critical services so that a competitor would have to make the leap of knowledge that Apache must be critical if this is a web server.

I was mistaken. If you disable, uninstall, or stop the Apache service, you are awarded 7 points! As you can imagine, this is an obvious mistake (or a typographical error, if that term suits you better). If this were an SAT question, The College Board would have to throw out the scores and grade as if the question had not been offered. When we reported the error to CPOC, we were informed that an announcement had been made on the TechChat system and that, had this been a competition round, an appeal could have been made to rectify the situation.

But precisely how would a Coach or Mentor ever know about this situation without an answer key? Only the competitors who accidentally (and incorrectly) disabled Apache would stumble upon the inappropriately awarded points, and they certainly would not be incentivized to report an error that would decrease their score. A team whose competitors strictly adhere to the wording of the README and read between the lines for its nuanced meaning would miss out on 7 points and never know it.

If this happened during an exhibition round, is it safe to say there is a non-zero possibility that it has previously happened during competition? I am not making an accusation — I cannot without an answer key. But I am positing that there exists that possibility. Why not provide some transparency to the scoring (restricted only to non-competitors)?

Proposal 4: Mentor Rounds

As in most negotiations, I have a compromise to offer. I still fervently believe that an answer key should be provided on a restricted-access basis, but I can appreciate that such a document takes a great deal of time to compile. Apart from the exhibition rounds during the summer, Mentors are unable to use competition virtual machines. What I propose is that, for the two days immediately following a competition round (so that there is no confusion about whether a score comes from a competitor or a mentor), the scoring system be kept online and each team be allowed one six-hour period in which a Coach and/or Mentor can use the virtual machine with all scoring features enabled.

This would accomplish a number of objectives: [1] Mentors would be able to self-assess whether they have the knowledge to properly train the competitors; [2] a partial audit of the scoring system could be performed; and [3] appropriate appeals could be raised by teams that believe points should have been awarded. Rather than adding the task of creating an answer key to the CPOC team’s workload, this would only require leaving the scoreboard server online for an additional 48 hours.

Conclusion

If you have reached this section, I owe you a great deal of thanks and appreciation. I have been frustrated by these topics for more than a year, and I am happy to have an open forum like this website in which to express them and to submit this large manifesto of proposals to your scrutiny. Once again, I encourage and invite your feedback. The main reason I posted this on medium.com instead of sending an email was so that your feedback could be tied directly to individual paragraphs. Thank you for your time and consideration.
