As an educator, I find AI mostly annoying. That's my quick take.
Why? Two reasons.
First, when I try to use it to create materials for class, it doesn't actually save time. It's okay for brainstorming and offering me options for what I might do, but the end products are never complete solutions: anything it offers up requires editing and refining, and is frequently inaccurate.
Second, it dilutes student thinking. I can usually tell when a student copies and pastes an answer from ChatGPT, or gives a response lifted straight from a Google search, because the word choice and sentence structure don't match the way the student speaks in class. Of course, a student with even basic writing skills can refine what AI spits out into their own voice.

A valuable part of writing is the struggle of sentence construction. Eh... maybe that struggle isn't valuable? Maybe it's just slow. Sometimes I'm slow in my sentence construction, but at least I know it's mine. Does AI foster the loss or depreciation of independent thought? Is this dangerous? It doesn't seem dangerous to me, but I didn't grow up with the temptation of a machine doing my thinking for me during my formative years. (I do remember wishing it were possible for a computer to write my papers for me.) Will future generations be able to distinguish their own thoughts from the sentences AI spits out? Probably (as long as we don't all put interfacing chips in our brains).
Anyway...
I read an interesting article today in which Yuval Noah Harari warns, "Never summon a power you can't control."
Harari thinks AI may be dangerous not because of Terminator-style destruction, but because it could deepen divisions between world cultures, destabilize global financial systems, weaponize personal information, or accelerate defensive tactics to the point where we annihilate ourselves through if-this-then-that logic traps.
Eh... I still think it's mostly just annoying. What do you think?