IoC detection experiments with ChatGPT

ChatGPT is a groundbreaking chatbot powered by the neural network-based language model text-davinci-003 and trained on a large dataset of text from the Internet. It is capable of generating human-like text in a wide range of styles and formats.


ChatGPT can be fine-tuned for specific tasks, such as answering questions and summarizing text, and even applied to cybersecurity-related problems, such as generating incident reports or interpreting decompiled code. Attempts have reportedly also been made to generate malicious objects, including phishing emails and even polymorphic malware.


It is common for security and threat researchers to publicly disclose the results of their investigations (adversary indicators, tactics, techniques, and procedures) in the form of reports, presentations, blog articles, tweets, and other types of content.

Therefore, we initially decided to check what ChatGPT already knows about threat research and whether it can help identify simple, well-known adversary tools such as Mimikatz and Fast Reverse Proxy, and spot the common tactic of renaming them. The output looked promising!
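
Questions like these can also be asked programmatically, which makes the checks repeatable. The snippet below is a minimal, illustrative sketch (not the exact code used in these experiments) that sends an IoC-style question about a host artifact to the text-davinci-003 model mentioned above. It assumes the legacy openai Python package (v0.x) and an OPENAI_API_KEY environment variable; the helper name ask_about_artifact and the prompt wording are invented for illustration.

```python
# Illustrative sketch: ask the model whether a host artifact looks like a
# renamed copy of a well-known adversary tool (e.g. Mimikatz, Fast Reverse
# Proxy). Assumes the legacy openai Python package (v0.x); prompt wording
# and helper names are hypothetical, not the original experiment's code.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

def ask_about_artifact(description: str) -> str:
    """Send a short IoC-style question about a host artifact to the model."""
    prompt = (
        "Does the following Windows process metadata look like a well-known "
        "hacker tool that has been renamed (for example Mimikatz or Fast "
        "Reverse Proxy)? Answer yes or no and explain briefly.\n\n"
        + description
    )
    response = openai.Completion.create(
        model="text-davinci-003",
        prompt=prompt,
        max_tokens=256,
        temperature=0,  # deterministic answers are easier to compare across runs
    )
    return response["choices"][0]["text"].strip()

if __name__ == "__main__":
    print(ask_about_artifact(
        "Process name: update.exe; original file name: mimikatz.exe; "
        "command line: update.exe privilege::debug sekurlsa::logonpasswords"
    ))
```

Keeping the temperature at 0 in such a sketch makes the answers more stable, which matters when the same question is asked about artifacts from many hosts.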