2019 will be the most security-conscious year yet, with the general public more concerned and enterprise spending on defense and research growing relentlessly. However, for innovative and emerging technologies, security will also present new challenges to organizations large and small. The status quo of speed and functionality being chosen over security to address competition is highly likely to continue, and boards and corporate leaders need to be continually convinced as to the importance of security as part of strategic business objectives.
What is becoming more of a challenge? What is alleviating some of the burden on enterprises? For those looking forward, and for those looking to find or build solutions, these are the cybersecurity trends to watch for 2019.
Necessary Speed and Agility Changes Aid Increasingly Secure Rapid Delivery
A house built on a shaky foundation will not stand up for long, just as software cannot truly be secure in an operational state if it was not originally built securely. Unfortunately, the extra time and effort that is needed to create secure software from the ground up is still a hard-sell.
In an ideal world, developers would all be security experts who coded everything as securely as possible, and management would understand and accept the need to spend extra resources to achieve a secure operational state by design. We know this is not the case today, so DevSecOps is one agility concept that helps incorporate security during development, without “tacking on” security at the end of a release cycle.
By accelerating the security audit process, secure development can be validated for assurance much faster. Through automation and the DevSecOps model, security is thus better able to keep up with frequent iterations with increased assurance.
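The automated-gate idea can be sketched as a pre-merge check in the pipeline. The patterns and file names below are illustrative assumptions; a real pipeline would invoke a maintained scanner (a dependency auditor, secret scanner, or SAST tool) rather than hand-rolled regexes:

```python
import re

# Hypothetical detection rules for the sketch; real gates use maintained tools.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "private_key_header": re.compile(r"-----BEGIN (?:RSA )?PRIVATE KEY-----"),
    "hardcoded_password": re.compile(r"password\s*=\s*['\"][^'\"]+['\"]", re.IGNORECASE),
}

def scan_text(text):
    """Return the names of any secret patterns found in the given text."""
    return [name for name, pattern in SECRET_PATTERNS.items() if pattern.search(text)]

def security_gate(files):
    """Fail the build (return False) if any file appears to contain a secret."""
    findings = {path: hits for path, hits in
                ((p, scan_text(body)) for p, body in files.items()) if hits}
    for path, hits in findings.items():
        print(f"BLOCKED {path}: {', '.join(hits)}")
    return not findings

if __name__ == "__main__":
    ok = security_gate({
        "app/config.py": 'password = "hunter2"  # oops',
        "app/main.py": "print('hello')",
    })
    print("build passes" if ok else "build fails")
```

Because the gate runs on every commit rather than at the end of a release cycle, findings surface while the developer still has context, which is the point of shifting security left.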
Another concept that can help maintain speed and agility is not new, but it also is not frequently used in most industries: Design thinking.
Rajat Mohanty, Co-founder and CEO of Paladion, recently wrote in Forbes that “design thinking places humans — not technology — at the center of both a problem and that problem’s potential solutions.” Mohanty also states, “Design thinking tells us to seamlessly blend cybersecurity controls into a user’s environment and to pay particular attention to smoothing out any complications or personal considerations that might complicate adherence. It takes these concerns seriously and designs a solution that corrects them, instead of wishing users would just follow technically perfect security controls that never survive contact with the real world.”
With the 2018 Verizon Data Breach Investigations Report (DBIR) stating that phishing and pretexting represent the start of a whopping 93% of breaches, adequate consideration of the human element of a solution should be at the forefront of software and application designers’ minds.
Additionally, using design thinking when implementing DevSecOps focuses on the internal users -- the developer and the DevOps engineer -- to ensure their priorities (delivering working solutions on-time) remain possible. If we do not empathize with their goals, they will not embrace our changes in the realm of security.
Shifting Boundaries Between Employees, Suppliers, and Customers
As more and more businesses move to cloud-based solutions, the boundary between employee, supplier, and customer has never been so thin. Formal organizational structures with internal teams managing business applications are unrecognizable in today’s businesses: Who plays what role and with what authority? What access is really needed? How are granular access controls being managed? Who is really responsible?
Next-generation supply chain management is imperative to business success and breach prevention. Better and more frequent validation of third-party controls is needed. An organization’s supply chain is only as strong as its weakest link, so understanding and managing security risks associated with suppliers is more important than ever.
Ensuring third-parties (in addition to the workforce) are able to efficiently do their jobs -- while still maintaining an acceptable organizational risk profile -- will be a formidable challenge.
According to Gartner, solutions like Data Loss Prevention (DLP), Identity governance and administration (IGA), and Identity and Access Management (IAM) will help organizations cope with a shifting “trusted user” landscape.
“Web App Security Testing” Is More Important Than Ever
From groceries to home genetics test kits, a significant portion of the data we send and receive is via mobile apps and websites. In 2015, Gartner reported that ‘75% of cyber-attacks and Internet security violations are generated through Internet applications.’ The application layer is the component most exposed to attack.
Many of these apps are written by individuals, or small companies, who are more concerned with generating users per day than with user-facing vulnerabilities.
Touching on the DevSecOps concept, not all security testing can be automated. Web app testing tools are difficult to keep updated and do not catch everything. Code review tools are akin to “spell check” or “signature-based analysis”; they are not infallible, but automated testing is still necessary.
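A minimal sketch of why "spell check"-style code review is fallible: a signature rule catches the exact shape it knows, and nothing else. The rule and snippets below are invented for illustration:

```python
import re

# A toy "signature-based" review rule: flag direct calls to eval().
UNSAFE_CALL = re.compile(r"\beval\s*\(")

def naive_review(source):
    """Return True if the source trips the known-bad signature."""
    return bool(UNSAFE_CALL.search(source))

direct = "result = eval(user_input)"
aliased = "f = eval\nresult = f(user_input)"  # same risk, different shape

print(naive_review(direct))   # caught by the signature
print(naive_review(aliased))  # missed: the pattern never fires
```

The aliased variant carries the identical risk, but the signature never matches, which is why automated scanning must be paired with human review.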
Increased resources will need to be spent on security testing to keep up with the speed and agility of most iterative software development life cycles.
Security Awareness Training Evolves Beyond a “Once-Yearly 30-minute” Course
Employees don’t need more security awareness training videos, they need to be exposed to better security awareness programs that provide regular, positive reinforcement via “teachable moments” to improve security user practices.
Security awareness is about breaking bad habits. Positive reinforcement for correct behavior, more frequent testing at random times, and a combination teaching approach that includes in-person training and exercises in addition to computer-based training/web-based training (CBT/WBT) are all needed to make users sit up and take notice.
Gamification and other measures to make security awareness relevant and engaging to employees are key to ensuring appropriate actions are regularly taken when social engineering and physical attacks are suspected.
Internal awareness programs must focus on high-risk groups and provide role-based training for employees, contractors, suppliers, etc. Automation can help with identifying and remediating (via training) any failures found through testing. Phishing testing should be conducted more often at non-predictable intervals, and customized spear-phishing testing should be performed for high-risk individuals.
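The non-predictable-interval idea can be sketched as randomized scheduling. The helper below (its name and parameters are hypothetical) jitters each campaign date around a mean gap so employees cannot anticipate the next test:

```python
import random
from datetime import date, timedelta

def schedule_phishing_tests(start, campaigns, mean_gap_days=30,
                            jitter_days=10, seed=None):
    """Spread simulated-phishing campaigns at randomized intervals
    instead of a predictable fixed monthly cadence."""
    rng = random.Random(seed)
    dates, current = [], start
    for _ in range(campaigns):
        # Each gap is the mean interval plus or minus random jitter.
        gap = mean_gap_days + rng.randint(-jitter_days, jitter_days)
        current = current + timedelta(days=gap)
        dates.append(current)
    return dates

for d in schedule_phishing_tests(date(2019, 1, 1), campaigns=6, seed=7):
    print(d.isoformat())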
Spear-vishing (vishing = voice phishing, done over the phone) is also a prevalent attack vector, so training and testing should be performed regularly.
Serverless and Microservice Technologies Solve Some Challenges, Introduce New Concerns
The application strategy of “keeping it small” by utilizing microservices and “serverless” functions-as-a-service (FaaS) solutions enables super rapid delivery: developers push code faster and only use resources on-demand. While this might be great for innovation, cost, and speed, these technologies introduce new security concerns.
Protego Labs recently discovered that 98 percent of functions in serverless applications are at risk, with 16 percent considered “serious.” In serverless environments, functions tend to be provisioned with more permissions than they require.
Excess permissions can be removed to improve the security of both the function and the application by applying least privilege and configuring security permissions on individual functions; however, that takes extra time and an organizational commitment to security at a granular level. Educated developers can overcome this obstacle, but time to release and functional correctness will likely win out over security in most cases, and hasty or uninformed development can quickly create a great deal of risk.
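Per-function least privilege can be illustrated with an AWS-style policy document. The function name, bucket ARNs, and actions below are hypothetical; the point is that each function gets only the calls and resources it actually uses, rather than a broad role shared by every function:

```python
import json

def least_privilege_policy(function_name, actions, resource_arns):
    """Build a narrowly scoped, IAM-style policy for one function."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": f"{function_name}Scope",
            "Effect": "Allow",
            "Action": sorted(actions),          # only the calls this function makes
            "Resource": sorted(resource_arns),  # only the data it actually touches
        }],
    }

# A hypothetical thumbnail generator needs to read one bucket and write
# another -- nothing else, and certainly not "s3:*" on "*".
policy = least_privilege_policy(
    "MakeThumbnail",
    actions={"s3:GetObject", "s3:PutObject"},
    resource_arns={"arn:aws:s3:::uploads/*", "arn:aws:s3:::thumbnails/*"},
)
print(json.dumps(policy, indent=2))
```

Reviewing each function's policy this way is exactly the granular, time-consuming commitment described above, which is why it so often loses to release pressure.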
The attack surface of FaaS is much larger than traditional Cloud applications, as each function and component is an entry point into the application. Injection flaws top the list of possible vulnerabilities in serverless apps, which might seem like a step backward into OWASP Top 10 territory.
● Increased attack surface: As serverless functions consume data from multiple event sources, such as HTTP APIs, message queues, cloud storage, and IoT device communications, the attack surface includes protocols and complex message structures that are hard to inspect with a typical web application firewall.
● Attack surface complexity: The architecture itself is quite new, so developers must adapt to unfamiliar patterns and scale them safely; the probability of misconfiguration is very high.
● Overall system complexity: It is very difficult to visualize and monitor applications developed with serverless architectures, as it is not a typical software environment. Hence, proper logging of events and functions is crucial for timely troubleshooting and for responding to security events.
● Inadequate security testing: Security testing on applications built on serverless architectures is far more complex than on standard applications, which is why automated scanning tools have not yet adapted to scanning serverless applications.
Like traditional cloud, using functions as a service shifts a significant amount of trust to the provider, so adopters need to consider similar privacy concerns as with any shared cloud environment. Application managers will need to determine the best way to cope with the increased attack surface created by the increased number of functions with direct access to the app.
Baselining and monitoring apps that run on-demand makes anomaly detection more challenging. The logging capabilities of current serverless technologies may or may not be adequate for security investigation and early attack detection; if not, serverless apps need to compensate for the gap. Verbose logging is useful for debugging, but it is also useful for an attacker.
FaaS technologies are still cutting-edge, and many questions need to be answered: Will developers and DevOps eventually replace QA? How can separation of duties be implemented if developers are releasing directly to production? If FaaS application code is being autonomously deployed instantly and repeatedly by developers, what do organizations need to start doing to keep up with securing new code? 2019 will present a new frontier for organizations utilizing this type of application delivery model.
Cloud Access Security Brokers (CASB) Alleviate Some Cloud Concerns, Introduce Their Own Risk
The January 2018 annual RightScale “State of the Cloud Survey” stated that 81 percent of enterprises have a multi-cloud strategy, 96% of survey respondents use cloud technologies, and the organizations leverage an average of 5 different clouds. Over half (53%) of survey respondents had less than 1000 employees.
Managing workloads, storage, and data at multiple cloud service providers (CSPs) is challenging for organizations both large and small.
As cloud adoption grows, so will the use of cloud access security broker (CASB) technology.
A cloud access security broker (CASB) is essentially middleware that sits between an organization’s on-premise infrastructure and its CSP(s). CASBs are a sort of gatekeeper that can help enforce controls like encryption, single sign-on (SSO), monitoring, detection, device profiling, etc. Some also offer traffic inspection.
The goal is to ensure that organizational policies are met as data leaves the premise and navigates to the cloud. A CASB may provide security and/or management capabilities in a single interface, eliminating the need to go to each CSP to perform various functions.
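The gatekeeper idea can be sketched as a single policy check applied before data leaves for any provider. Real CASBs are full products with inspection and SSO integration; the rules and provider names here are invented for illustration:

```python
# Sanctioned providers for this hypothetical organization.
ALLOWED_PROVIDERS = {"csp-a", "csp-b"}

def casb_gate(request):
    """Return (allowed, reason) for an outbound upload request.

    One chokepoint enforces policy for every CSP, instead of
    configuring each provider's console separately.
    """
    if request["provider"] not in ALLOWED_PROVIDERS:
        return False, "unsanctioned cloud provider"
    if request.get("classification") == "confidential" and not request.get("encrypted"):
        return False, "confidential data must be encrypted before upload"
    return True, "ok"

print(casb_gate({"provider": "csp-a", "classification": "public"}))
print(casb_gate({"provider": "shadow-it-drive", "classification": "public"}))
print(casb_gate({"provider": "csp-b", "classification": "confidential"}))
```

The value of the single chokepoint is also its weakness, as the next paragraphs note: everything now depends on this one component being correct and protected.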
As network boundaries blur more and more, CASB functionality helps provide visibility, security of data, compliance, and defense against threats. However, centralizing security and management into a single broker that manages data across multiple sources also presents its own set of risks.
Central data encryption and authentication represents both a single point of failure and a high-value target to attackers. Having a firm understanding of the risks introduced by a CASB is important if an organization wishes to realize the full potential and gains a CASB can provide. A CASB will become one of the crown jewels of an organization and must itself be adequately protected.
Biohacking, Customized Medicine, and Connected Medical Instruments Threaten our Health and Safety
Cybernetics have long been the fodder of sci-fi stories, but over the last few decades, they have become reality.
The biomedical industry has had a string of concerns, from out-of-date, vulnerable machinery that is too expensive to replace to fully digitally connected hospitals. Neither the U.S. Health Insurance Portability and Accountability Act of 1996 (HIPAA) nor the FDA has addressed the security of devices and implants.
The FDA is stringent on safety, but when it comes to information security, this is a self-regulating industry. Similar to other industrial control systems (ICS), the expensive, outdated medical equipment that was never intended to be connected to a network was not designed for security and offers little to no defense against attack.
Public demand, perhaps fueled by media attention, is going to require an increase in security for these devices. In its April 2018 Medical Safety Action Plan, the FDA said it is “considering seeking additional authorities to require the documentation as part of device makers’ premarket submissions for FDA review”.
The U.S. Health and Human Services (HHS) Health Care Industry Cybersecurity (HCIC) Task Force produced its 2017 “Report on Improving Cybersecurity in the Health Care Industry”, which identified six high-level imperatives, including increasing “the security and resilience of medical devices and health IT”, and “developing the health care workforce capacity necessary to prioritize and ensure cybersecurity awareness and technical capabilities”, neither of which will happen overnight. This means an increase in healthcare security jobs and spending over the next few years.
On the DIY side of healthcare, biohackers are already creating modern technology such as a homemade artificial pancreas for diabetes patients. With rapid development kits, anyone can create a variety of very valid or totally quack medical devices. RFID implants are widely used in cats and dogs, but humans now have RFID devices “installed” via injection. Bureaucracy cannot keep up with human innovation in the world of IoT.
As with their larger, more industrial counterparts, the Internet of Medical Things needs to determine what its priorities are and implement security controls around connections to and from the devices, data protection, access controls, and more. The bio-medical industry is another area where Design Thinking could be a useful approach.
Industrial Control System Attacks Continue to Rise
Kaspersky’s recent trend study “The State of Industrial Cybersecurity 2018” reports that 90% of Industrial Control Systems (ICS) are connected to a wireless network. Perhaps more significantly, 48% of respondents do not have an incident response program specifically for these devices. With 58% also facing a “major challenge” hiring security staff with the right talent and skills to properly manage these systems, the 31% of incidents discovered in 2017 may be both an undercount due to failures to detect and a major concern given the potential financial and safety consequences.
Botnets and malware are top attack patterns with devices throughout an enterprise, with ransomware and denial of service becoming more prevalent.
The “People, Policies/Procedures, and Technology” needed to achieve enhanced defense for ICS take time and specialized knowledge to implement. New solutions to address authentication, visibility, and management across a diverse range of network-connected devices are needed.
Attitudes Towards Compliance Mature
Compliance has repeatedly shown to be an incomplete inoculation against a security breach. Corporate leaders are starting to move away from seeing compliance as an end-state towards seeing it as validation and assurance of an effective information security program being in place.
Compliance is not security, and more organizations realize that more than just the bare minimum must be done to create actual security.
Performing a “pragmatic security gap analysis” is the logical next step after the rational realization that compliance is not enough.
Deep understanding of infrastructure and cloud technologies in use, of the users and access needed for data and business functions, and of the interactions of process and system is needed to move beyond a compliant organization to a secure organization.
“Smart City” and IoT Capabilities Grow, Related Attacks Increase
Whether it is the “smart” water meter installed on your home, or a mesh of stop lights that are controlled by big-data systems, smart cities are not the future - they are reality.
IoT and robust cellular and wireless networks have made it possible to improve how cities work. The economy, crime, health care, transportation, and other critical aspects of how we live will be connected and controlled by IoT devices with “smart” capabilities.
Protection against denial of service will be of chief concern, as street light outages and smart parking systems could cause gridlock and effectively shut a city down. Imagine if ransomware attacked the subway system. Power and water outages are unacceptable, so controls and defenses for this evolving threat landscape are essential. The Information Systems Audit and Control Association (ISACA) cites the energy sector as most susceptible to exploitation (76%), followed by regional communications infrastructure (70%) and financial services (62%).
Robert E Stroud, CGEIT, CRISC, past ISACA board chair and chief product officer at XebiaLabs, states, “Before our cities can be identified as being ‘smart,’ we must first and foremost transfer this smart attitude to the way we approach and govern the rollout of new technology and systems. Our urban centers have many potentially attractive targets for those with ill intent, so it is critical that cities make the needed investments in well-trained security professionals and in modernizing their information and technology infrastructure.”
Public governments will need to rely increasingly on the private sector’s expertise and experience in building and selecting secure solutions, protecting mesh networks of diverse devices, and maintaining a funded, adequately prepared response capability for when attacks do happen.
WPA3 Upgrades, Though Needed, Will Not Occur Overnight
Although WPA3 now exists and offers more security than WPA2, legacy wireless encryption will remain prevalent throughout 2019 and beyond.
Router and device manufacturers will provide updates for personal and SOHO “things”, but non-enterprise users will ultimately fall behind in updating legacy devices, non-auto-updating devices, and devices people don’t understand and won’t update themselves. Unless users are forced to update, they will not. And legacy devices that cannot support WPA3 or are no longer supported (i.e., no more vendor patches) will demand that WPA2-compatible networks exist for longer than anyone would hope.
Enterprises will face their own challenges of size and complexity but will inevitably be the first adopters due to their broad control of, and insight into, infrastructure and devices (including managed BYOD). Customers and compliance requirements will ultimately drive enterprises to make the change, which is a good thing.
Adoption will be slow, and backward compatibility will be prevalent for some time. This shines a light on the larger problem of IoT and legacy devices: how they are managed, updated, and tracked to keep up with a changing threat landscape.
Covert Malware, Cryptocurrency Mining, and Browser-based Command and Control (C2) Make Successful Attacks Increasingly Difficult to Prevent and Detect
Malware is becoming harder to prevent and, once an attacker has a foothold, increasingly hard to detect.
All types of malware are getting better at evading detection. Most anti-virus programs rely on identifiers known as signatures, which are compared against a list of known signatures for malicious code. Most anti-virus software creates signatures from code that is uploaded in real time during scanning. Other signatures are created when a malware creator publishes code or when the recipient of malware uploads the code to a site like VirusTotal.com.
However, when malicious code can change itself by randomizing its name or changing other identifying features, signature comparison engines will begin to fail at detecting it.
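A toy demonstration of why exact signatures fail against self-modifying code: when the signature is a hash of the whole payload, a single changed byte produces a completely different signature. The payloads below are harmless stand-in strings:

```python
import hashlib

def signature(payload):
    """A simple whole-file signature: the SHA-256 of the bytes."""
    return hashlib.sha256(payload).hexdigest()

# The "known signatures" list mentioned in the text, with one entry.
KNOWN_BAD = {signature(b"malicious-payload-v1")}

def detected(payload):
    return signature(payload) in KNOWN_BAD

original = b"malicious-payload-v1"
mutated  = b"malicious-payload-v2"   # one byte changed; behavior unchanged in practice

print(detected(original))  # True  -- the known sample is caught
print(detected(mutated))   # False -- the trivially mutated copy sails through
```

Polymorphic malware automates exactly this kind of mutation on every infection, which is why purely signature-based engines fall behind.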
Fileless malware is a type of attack that does not download or install malicious executable files. It “lives off the land” (aka LOLing) by using exploits, scripts, or legitimate system tools instead. Anti-virus scanners look at files, not at the threads running inside legitimate processes. Other variants install encrypted registry keys that are unreadable to regedit.exe and other viewing tools. Fileless malware abuses valid executables, launches scripts from memory, and does not leave behind artifacts that would make the malicious code easily detectable.
The Verizon DBIR reflects this trend - at least 37% of malware hashes appear only once, and Verizon notes that number is "being extremely conservative with the data — it's rather likely you won't see a much higher percentage ever again."
According to the Ponemon Institute, 77 percent of successful compromises in 2017 utilized fileless techniques and exploits, and the approach is on the rise. Fileless malware is not new, but it has become more widespread. Barkly identified this type of malware as the #1 cybersecurity trend, citing an “ongoing shift away from using malicious .exe files to package and deploy malware.”
After a successful compromise, an attacker needs to maintain communication channels to an asset. Even if a host-based firewall is in place, a web-based command and control (C2) capability can bypass most controls by using the web browser to communicate.
Tools like TrevorC2 by Dave Kennedy, CEO of TrustedSec.com and Browser-C2 by 0x09AL use covert channels to communicate back to a compromised system. This is done via code that is injected into the HTML of every website the user browses to.
Tools like TrevorC2 utilize a “client/server model for masking command and control through a normally browsable website. Detection becomes much harder as time intervals are different and does not use POST requests for data exfiltration.”
Preventing malware is always better than detecting and recovering from it, so better predictive algorithms and analysis are needed. Looking for unusual behavior by “authorized” programs is resource-intensive but could be the only way to defend against an ever-changing, “LOL”-based threat.
Another growing trend implemented by criminals is cryptojacking. Once an endpoint has been compromised, the attacker may hijack the processing power of the device to mine cryptocurrency. Cryptojacking can be done via a web browser or a mobile app, and it has even been seen abusing devices like “Smart TVs”. It’s less risky and easier to deploy than ransomware, and a more lucrative alternative to web-based advertising.
Adguard found cryptojacking scripts on 33,000+ sites with 1 Billion monthly visits and noted that in-browser mining grew by 31% over a single month. Sites and apps do not warn users or get their consent to perform mining. Entire botnets dedicated to cryptojacking have been found, making millions for the botnet operators. Criminals love easy, undetectable money.
Ad-blockers can help prevent cryptojacking, web-blockers can help prevent or respond to malware and C2, but security awareness training continues to be the best defense against all types of malicious code. A well-trained helpdesk and SOC can detect high CPU usage, excessive heat on mobile devices, or cooling-fan failures.
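The high-CPU heuristic the helpdesk and SOC rely on can be sketched as a check for sustained load rather than brief spikes. The samples and thresholds below are invented; a real detector would feed in actual telemetry:

```python
def sustained_high_cpu(samples, threshold=90.0, min_run=5):
    """Flag a run of consecutive CPU-usage samples above the threshold,
    the kind of sustained load a browser-based miner produces."""
    run = 0
    for pct in samples:
        run = run + 1 if pct >= threshold else 0
        if run >= min_run:
            return True
    return False

normal = [12, 35, 88, 40, 22, 95, 30, 18, 25, 41]   # brief spikes are fine
mining = [15, 92, 97, 99, 98, 96, 97, 95, 94, 93]   # pegged after compromise

print(sustained_high_cpu(normal))  # False
print(sustained_high_cpu(mining))  # True
```

Requiring a run of consecutive samples is the design choice that separates legitimate bursts (a build, a video render) from a miner that pegs the CPU indefinitely.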
Machine Learning and Artificial Intelligence Technologies Are Vulnerable
Machine Learning and Artificial Intelligence (ML and AI, respectively) are vulnerable to a number of attacks, which will become more critical as security gains digital autonomy.
Computer intelligence requires either training or learning on its own. When a computer is trained, it is reliant on the limited data it is given and makes decisions based on best guesses.
An “adversarial input” is a type of intelligence attack that provides an input that is intended to evade detection. A simple example of this is an email that is designed to be undetectable by a spam filter. A more complex example of this involves the use of a disguise designed to defeat facial recognition technology.
The training data itself can be polluted, via a data poisoning attack. If adversarial inputs are included in the training data, then items will be misclassified when detected. If I tell the computer an apple is an apple, and a banana is also an apple, then when it sees a banana, it will identify it as an apple. Bad data in, bad results out.
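The apple-and-banana point can be made concrete with a toy nearest-centroid classifier. All features and values are invented purely to illustrate label poisoning:

```python
# Toy classifier over (weight_g, length_cm) pairs: one centroid per label,
# predictions go to the nearest centroid.

def train(examples):
    """Compute one centroid per label from (features, label) pairs."""
    sums, counts = {}, {}
    for (w, l), label in examples:
        sw, sl = sums.get(label, (0.0, 0.0))
        sums[label] = (sw + w, sl + l)
        counts[label] = counts.get(label, 0) + 1
    return {lab: (sw / counts[lab], sl / counts[lab]) for lab, (sw, sl) in sums.items()}

def predict(centroids, point):
    return min(centroids, key=lambda lab: (centroids[lab][0] - point[0]) ** 2
                                          + (centroids[lab][1] - point[1]) ** 2)

clean = [((150, 8), "apple"), ((160, 7), "apple"),
         ((120, 20), "banana"), ((130, 22), "banana")]
# Poisoning attack: an adversary relabels every banana as "apple".
poisoned = [(feats, "apple") for feats, _ in clean]

print(predict(train(clean), (125, 21)))     # a banana-shaped input
print(predict(train(poisoned), (125, 21)))  # bad data in, bad results out
```

With clean labels the banana-shaped input lands on the banana centroid; with poisoned labels the model has never seen the concept "banana" at all, so every input is an apple.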
Web content search engines rely on user behavior and predictive technologies. When a search engine is repeatedly told a search result is irrelevant, your market competitor might be knocked back to page 12 of the search results. Search engine manipulation has been around for a long time, and although defenses have gotten better, ML/AI poisoning will become increasingly problematic as these technologies spread into more sectors.
Security’s future relies on automation, from fraud prevention to software testing, to augmentation of already spread-thin staff. Ensuring that the automation itself is not compromised should be of paramount importance for creators and adopters.
Companies Have Insufficient Cybersecurity Insurance
In a 2018 survey run by Ovum and commissioned by FICO, 26% of U.S. companies reported a belief that their premiums are based on inaccurate analysis by their cyber insurer. Cyberattacks are not predictable the way seasonal fires or weather patterns are.
Cyber-attacks increase as organizations continue to digitize, and standardized models for predicting their costs and occurrence do not exist. Insurance providers and governing bodies will need to start standardizing insurance requirements and practices to adequately cover the needs of customers wishing to offset losses and the risk of attack.
As families and individuals start buying insurance for their identities and digital lives, these policies should be accompanied by awareness training and risk reduction techniques to minimize and avoid risk, rather than just transferring it. If auto insurance can provide discounts for immobilization systems, discounted premiums for people who actively use defenses like antivirus and other measures should be possible. How such measures can be monitored (an immobilization system is not easy to uninstall) does raise a privacy concern, which will need to be considered if any such discounting program were implemented.
Deepfakes (Face Swapping and Voice Swapping Technology) Threaten the Very Fabric of Our Global Society
Another technology no longer relegated to sci-fi, Deepfakes are the scariest fake news of the future. Technology that can perform believable face swapping and voice swapping is here. This is beyond “Photoshopping” someone’s face onto another person’s body; deepfakes are a moving, speaking, realistic rendition of someone else that can be recorded or performed live.
Like most new tech, it’s already being used in the adult film industry. But what happens when bullies get hold of the software and torment their fellow school children? Or when political dissidents impact elections or start a nuclear war?
Platforms like Facebook, Twitter, and YouTube have attempted to thwart misinformation and personalization abuse, but deepfake videos (or live broadcasts) will evade human and eventually machine detection.
Motherboard claims “there is no tech solution to deepfakes”. However, the Defense Advanced Research Projects Agency (DARPA) is halfway into a four-year effort to create deepfake identification tools. Algorithms to analyze biometric data are one promising tool, for example, Satya Venneti of Carnegie Mellon University has been analyzing the pulses of people in deepfake videos, finding "widely varying heart rate signals" in parts of the body that should be identical in spoofed videos.
Professionalization of the Security Workforce
Turning the security arts into a science is a controversial topic in the industry.
On the one hand, certifications do not prove competence, and the “3-5 years of experience” required on a job posting can be extremely varied from one person to the next. 3-5 years at a big company is different from 3-5 years at a startup, and 3-5 years as a defender is a lot different from 3-5 years as a penetration tester.
What is the difference between a job and a profession? A common comparison for a “professional” progression is that of the doctor: specialized schooling leads to internship, residency, fellowship, and so on. Malpractice insurance is required to practice, and board certification means specialists in the field examine and certify the qualifications of the individual. No similar progression exists for “InfoSec professionals.”
Doctors deal with life and death, health and care. Information security professionals today often deal with matters of national security and services that could affect the power grid, and hospitals rely on computing systems and devices (and the services throughout the supply chain) to keep patients alive. Cybersecurity is about life and death in many ways, so the argument for standardization and governance of its professionals is strong.
The UK National Cybersecurity Council developed a proposal for regulating cybersecurity practitioners in a document called “Developing the Cyber Security Profession in the UK”. The proposal suggests turning cybersecurity from a job into a profession via a framework built around four specific themes to be delivered by 2021:
- Professional development
- Professional ethics
- Thought leadership and influence
- Outreach and diversity
Not everyone supports regulation, however, and the UK wants to develop the body and then make it an independent entity. An “Alliance” of supporting bodies - including CREST, ISACA, and (ISC)2 - supports the effort.
However, the argument against this type of regulation is also compelling. Many of the best contributors to the field don’t have fancy degrees (and many are high school dropouts). The same spirit that drives the community of non-criminal hackers to take things apart and question the rules often comes with a contempt for authority and rebellious confidence. The world’s hackers are a force for good (again, note, not “criminals” but “hackers”); they find vulnerabilities before attackers do, share tools and knowledge, and make the world a safer place. We can’t possibly quantify the loss from contributions that would never be made if some of these folks were required to stand up before a committee of people to continue the work they do.
A great example of the problem of finding good hackers is embodied in statements from former FBI Director James Comey. A 2015 report explained that only 2000 out of the 5000 persons who applied for jobs at the FBI’s Cyber Division would meet the eligibility requirements and stated “the FBI did not hire 52 of the 134 computer scientists for which it was authorized; and five of the 56 field offices did not have a computer scientist assigned to that office’s Cyber Task Force.” Comey addressed this report in the Wall Street Journal, saying “I have to hire a great workforce to compete with those cyber criminals and some of those kids want to smoke weed on the way to the interview.”
As security matures from an art into a science, regulators must be cognizant of individualism and allow for different paths into cybersecurity work. While some will jump at the chance to be recognized as experts, others will balk at (if not blatantly defy) the idea that any authority can tell them what they may and may not research.
Regulation can certainly ensure that everyone adheres to certain standards, but it can also discourage innovation. Any regulation or professionalization of cybersecurity should allow for radical thinking and experimentation, and provide pathways for talented people who do not follow traditional routes to expertise. Otherwise, the skills gap will keep growing and the ratio of open jobs to available candidates will keep climbing.
Topics That Unfortunately Will NOT Be Trends in 2019
Passwords: 2019 will not be the year we get rid of passwords.
Biometrics: Biometric authentication is a growing trend, seen on everything from computer tablets to construction sites, but the kinks are still being worked out. Even as we move into a world where you can look deep into the eyes of your computer for facial recognition, older legacy devices will still require passwords for the rest of their useful lives. Biometrics such as facial recognition also fail in everyday conditions - say, when wearing a different pair of glasses or in the wrong kind of light - so a PIN or password is still needed as a fallback. For now, password managers remain our best solution for generating strong, unique, random passwords for each account.
USB Vulnerabilities: USB security (or the lack thereof) still needs to be addressed. Inexpensive hacker tools like the Hak5 BashBunny are useful to attackers because they can emulate a keyboard, serial port, network adapter, or hard drive and execute code faster than a human can type, even on locked machines. Studies have shown that roughly half of people will plug in a USB device they find on the ground, so USB drop attacks will continue to be a successful way of infiltrating an organization. SANS has been reporting on USB attacks as the “Ubiquitous Security Backdoor” since at least 2009, and little has changed in ten years. 2019 is a year that needs critical controls for this universal and incredibly dangerous threat, but we most likely won’t see them; user education continues to be the primary defense.
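Device-level controls of this kind do exist, even if they are rarely deployed. As a minimal sketch, assuming a Linux host with the open-source USBGuard daemon installed (package names and file paths vary by distribution), an administrator could whitelist currently trusted devices and block everything else:

```shell
# Sketch: USB device whitelisting with USBGuard on Linux.
# Assumes the usbguard package is installed; paths vary by distro.

# Generate an allow-list policy from the devices attached right now
# (the trusted keyboard, mouse, etc.) and install it as the ruleset.
usbguard generate-policy > /etc/usbguard/rules.conf

# Enable and start the daemon; from now on, any newly attached device
# that does not match the allow-list - including a malicious "keyboard" -
# is blocked until explicitly authorized.
systemctl enable --now usbguard

# Review attached devices and authorize a legitimate new one by hand.
usbguard list-devices
usbguard allow-device 7   # hypothetical device ID from list-devices
```

Whitelisting by device attributes is not foolproof (an attacker can spoof a known vendor and product ID), but it raises the bar well above the current default of trusting every device that enumerates.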
Software Engineering Education Ignoring Security: Most undergraduate software engineering programs include little security in the curriculum. A single class in information security is not enough for tomorrow’s software creators and does little to help the organizations that will hire them. The old status quo for code was that it needed to be on time and working; the new status quo needs to be on time, working, and secure. Until colleges and universities weave security into every class, and until professors grade for security on every assignment, software creators will not enter the workforce with the skills needed to protect our most important assets. The responsibility should not fall on each organization to train its developers and engineers in the tenets of basic security. Schools do their students and the world a disservice by failing to adequately prepare the developers and engineers of tomorrow for their significant role on the front lines of software security.
2019 is going to be a challenging year for detecting misinformation and malware. Smart cities and IoT will bring advances to our daily lives, from environmentally friendly smart water meters to biomedical technology; we will get to work more quickly and be healthier doing it. Security awareness and testing measures will ramp up, as preventing a breach is always better than recovering from one. Supply-chain threats will be an increasing risk, and attitudes about compliance will need to shift toward providing actual security for organizations. Humans remain the first line of defense (for now).