I am fairly confident that thinking and creative humans will not be swept away, but may change the things that they do, e.g. move up the abstraction stack, some of the time. (I will still be hacking shell scripts on my RPis and their successors by hand until those hands are nearly cold and dead...)
Yes, lifelong learning is an enabler for that, which is why I am doing a part-time PhD.
I am fairly upset about the bullshitting and general cheating and dehumanising that the current hyped 'AI' is enabling. There is good AI stuff going on: we don't need to be using it to destroy the planet and people's sense of self-worth.
Thanks for your perspective!
> Yes, lifelong learning is an enabler for that, which is why I am doing a part-time PhD.
I'm thinking about that, too!
> I am fairly upset about the bullshitting and general cheating and dehumanising that the current hyped 'AI' is enabling.
Oh man, I am, too. And I fear that enshittification will only increase.
> we don't need to be using it to destroy the planet and people's sense of self-worth
We absolutely do not, but I guess they will, because it promises profits.