Breaking the hype cycle of AI in cybersecurity


Every few years in IT, a new buzzword or phrase seems to grip the imaginations of both the technology press and the wider public.


A year or two ago, the talk was all about blockchain. Now the phrase on everyone’s lips is “AI,” and it seems that every product coming to market ships with some form of intelligent, self-learning machine at its core.


Often those claims are spurious, but they can be justified by the broad terms that define artificial intelligence. Whether your new household appliance teaches itself to respond to your particular habits is not of interest here. It is, however, worth paying attention to claims of “AI power” in mission-critical enterprise systems, especially those coming to market in the cybersecurity field.


Without getting into the detail of TensorFlow, PyTorch, SINGA, or Caffe, it’s probably worth defining what we mean by artificial intelligence in the cybersecurity function. First, we can safely assume that vendors’ claims of AI in their products are indeed valid; after all, it’s practically impossible to pull the wool over the eyes of professional cybersecurity operatives, whose grasp of networking and computing technology is about as deep as it gets!


But there is a good deal of variance in the areas where cognitive routines are deployed. No vendor is claiming that its new product line can accurately predict hackers’ next moves and prevent them, and the industry is thankful for that. Instead, AI appears to be taking hold in certain areas where the technology is proving its worth, forming a useful addition to the inventory of tools, methods, and processes that can help protect the enterprise.


