Understanding AI Chat Bots with Stanford Online

The news is full of speculation about chatbots like GPT-3, and even if you don’t care, you are probably the kind of person that people will ask about it. The problem is, the popular press has no idea what’s going on with these things. They aren’t sentient or alive, despite some claims to the contrary. So where do you go to learn what’s really going on? How about Stanford? Professor [Christopher Potts] knows a lot about how these things work and he shares some of it in a recent video you can watch below.


One of the interesting things is that he shows questions that one chatbot will answer reasonably and another will not. As a demo or a gimmick, that's not a problem. But if you are using one as, say, your search engine, getting the wrong answer won't amuse you. Sure, a conventional search can also turn up wrong things, but they will be embedded in a lot of context that might help you decide they are wrong and, hopefully, alongside some results that are not wrong. You have to decide. If you've ever used a product like Grammarly or even a simple spell checker, it is much the same: it suggests corrections, but you must make sure the suggestions aren't themselves incorrect. It doesn't happen often, but it is possible to get a wrong suggestion.


On the technical side, all of these programs are built around something called "the transformer," an architecture that looks at input words and their positions. The idea came mostly out of Google in a 2017 paper and has (no pun intended) transformed language processing results.
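To make that a little more concrete, here is a minimal sketch of scaled dot-product attention, the core operation inside a transformer, in plain NumPy. This is a toy illustration, not how any production chatbot is actually implemented: the matrices, sizes, and the identity "word vectors" below are made up for demonstration, and real models add learned projections, multiple heads, and positional encodings on top of this.

```python
import numpy as np

def softmax(x, axis=-1):
    # Subtract the max for numerical stability before exponentiating.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention: each output row is a weighted
    mix of the value rows, weighted by how well each query matches
    each key."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)     # similarity of every query to every key
    weights = softmax(scores, axis=-1)  # each row of weights sums to 1
    return weights @ V

# Toy example: a "sentence" of 3 words, each a 4-dimensional vector.
# (These identity-like vectors are purely illustrative.)
x = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0]])

# Self-attention: the sequence attends to itself (Q, K, and V all from x).
out = attention(x, x, x)
print(out.shape)  # (3, 4)
```

The key point the professor's "words and their positions" phrasing captures: attention by itself has no notion of word order, so transformers add positional information to the input vectors before this step so that "dog bites man" and "man bites dog" come out different.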
