It's an interesting one. We'll have to discover where to draw that line in education and training.
It is an incredible accelerant in top-down 'theory driven' learning, which is objectively good, I think we can all agree. Like, it's a better world having that than not having it. But at the same time there's a tension between that and the sort of bottom-up practice-driven learning that's pretty inarguably required for mastery.
Perhaps the answer is as mundane as: one must simply do both, and failing to do both will just result in... failure to learn properly. Kind of as it is today, except that today there's often no truly accessible / convenient top-down option at all, so it's not a question anyone thinks about.
OP here, yeah, I think that's a really good point.
I feel like the way I'm building this in is a violent maintenance of two extremes.
On one hand, fully merged with AI and acting like we are one being, having it do tons of work for me.
And then on the other hand there's this analog gym where I'm stripped of all my augmentations and tools and connectivity, and I'm being quizzed on how well I can do just by myself.
And how well I do in that NAUG (non-augmented) scenario determines which tweaks need to be made to my regular AUG (augmented) workflows to improve my NAUG performance.
Especially for those core identity things that I really care about. Like critical thinking, creating and countering arguments, identifying my own bias, etc.
I think as the tech gets better and better, we'll eventually have an assistant whose job is to make sure that our un-augmented performance is improving, vs. deteriorating. But until then, we have to find a way to work this into the system ourselves.
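Until that assistant exists, the tracking part doesn't have to be fancy. Here's a minimal sketch of the idea in Python, with every name and threshold made up: log an un-augmented score after each analog-gym session and flag any skill whose recent average has slipped.

    import json
    from datetime import date
    from pathlib import Path

    LOG = Path("naug_scores.json")  # hypothetical score log

    def record(skill, score):
        # Append today's un-augmented (NAUG) score for a skill, 0-100.
        data = json.loads(LOG.read_text()) if LOG.exists() else {}
        data.setdefault(skill, []).append({"date": date.today().isoformat(), "score": score})
        LOG.write_text(json.dumps(data, indent=2))

    def deteriorating(skill, window=5, drop=10.0):
        # True if the mean of the last `window` scores fell by more than
        # `drop` points compared to the window before it.
        data = json.loads(LOG.read_text()) if LOG.exists() else {}
        scores = [e["score"] for e in data.get(skill, [])]
        if len(scores) < 2 * window:
            return False  # not enough history to call it a trend
        recent = sum(scores[-window:]) / window
        earlier = sum(scores[-2 * window:-window]) / window
        return earlier - recent > drop

    # After an offline session, for example:
    # record("argument-construction", 72)
    # if deteriorating("argument-construction"):
    #     print("tweak the AUG workflow: do more of this step by hand")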
There could also be an almost chaos-monkey-like approach of cutting off the assistance at indeterminate intervals, so you've got to maintain a baseline of skill / muscle memory to be able to deal with it.
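For concreteness, that could be as small as a script run at the start of each work session that randomly decides whether assistance stays on. A rough sketch only; the settings path and key are invented, so point them at whatever your editor actually reads.

    import json
    import random
    from pathlib import Path

    # Both of these are made up; substitute whatever your editor uses.
    SETTINGS = Path.home() / ".config" / "myeditor" / "settings.json"
    KEY = "ai.completions.enabled"
    NO_ASSIST_PROBABILITY = 0.2  # roughly one session in five goes un-augmented

    def roll_for_session():
        # Randomly decide whether AI completions stay on for this session,
        # then write the decision into the (hypothetical) settings file.
        assist = random.random() >= NO_ASSIST_PROBABILITY
        settings = json.loads(SETTINGS.read_text()) if SETTINGS.exists() else {}
        settings[KEY] = assist
        SETTINGS.parent.mkdir(parents=True, exist_ok=True)
        SETTINGS.write_text(json.dumps(settings, indent=2))
        return assist

    if __name__ == "__main__":
        print("assistance on" if roll_for_session() else "no assistance today: analog gym")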
I'm not sure if people would subject themselves to this, but perhaps the market will just serve it to us as it currently does with internet and services sometimes going down :-)
I know for me, when this happens, and also when I sometimes do a bit of offline coding in various situations, it feels good to exercise that skill of just writing code from scratch (erm, well, with IntelliSense) and kind of re-assert that I can still do it, now that we're in tab-autocomplete land most of the time.
But I guess opting into such a scheme would be one-to-one with the kind of self-determined discipline required to learn anything in the first place anyway, so I could see it happening for those with at least as much motivation to learn X as exists today.
The way I see it, LLMs aren't really much different from existing information sources. I can watch video tutorials and lectures all day, but if I don't sit down and practice applying what I see, very little of it will stick long term.
The biggest difference I see is that, pre-LLM, I spent a lot more time searching for a good source on whatever I was after, and I probably picked up some information along the way.
> We'll have to discover where to draw that line in education and training.
I'm not sure we (meaning society as a whole) are going to have enough say to really draw those lines. Individuals will have more of a choice going forward, just like they did when education was democratized via many other technologies. The most that society will probably have a say in is what folks are allowed to pay for as far as credentials go.
What I worry about most is that AI seems like it's going to make the already large have/have-not divide grow even more.
That's actually what I mean by 'we'. As in, different individuals will try different strategies with it, and we the collective will discover what works based on results.
We want machines to do the laundry and clean our house so we have more time to create art and write code. Seems like in our current trajectory, the machines will produce the art and code so we have more time to clean our house and do laundry....
But maybe both of those are in the category of undesirable things.
And the things we end up with are like art and baking and walking and talking and drinking coffee and such.
Professional chess is a nice pattern here. A chess engine can beat Magnus Carlsen at this point, but chess is more popular than ever. So it should be ok if AI/robots are better than us at all the stuff we still decide to do.

I'd love for them to take my job as a programmer though, as that would certainly free up time for me to travel and drink coffee and Guinness.
Except professional chess, taken to mean players earning a living solely from paid tournament play, numbers in the low hundreds? Thousands? Meanwhile there are over 20 million 'professional' software developers [1]. There's plenty about that single number I could argue with, but even so, I'm not sure there's ever been much of a market for 'professional chess players', yet there clearly is one for 'professional software developers' (for some definitions of 'professional' and 'software').

[1] https://evansdata.com/reports/viewRelease.php?reportID=9
The point that we should do the things we want to be, as in, the things we want as part of our identity, is a really good insight. Even if the AI can do X well, maybe I want to also be able to do X, and therefore I should practice it.
I don't know what those things will be for me, yet, but it's good to have a more specific and directed way to think about which skills I want to keep.
When one thinks about human decision making, there are at least two classes of decisions:
1. decisions made with our "fast" minds: ducking out of the way of an incoming object, turning around when someone calls our name ... a whole host of decisions made without much, if any, conscious attention, and about which, if you asked the human who made them, you wouldn't get much useful information.
2. decisions made with our "slow" minds: deciding which of 3 gifts to get for Aunt Mary, choosing to give a hug to our cousin, deciding to double the chile in the recipe we're cooking ... a whole host of decisions that require conscious reasoning, and if you asked the human who made those decisions you would get a reasonably coherent, explanatory logic chain.
When considering why an LLM "made the decisions that it did", it seems important to understand whether those decisions are closer to type 1 or type 2. If the LLM arrived at them the way we arrive at a type 1 decision, it is not clear that an explanation of why is of much value. If an LLM arrived at them the way we arrive at a type 2 decision, the explanation might be fairly interesting and valuable.
Does it really matter how the LLM got to a (correct) conclusion?
As long as the explanation itself is sound and I can follow it, I don't really care if the internal process looked quite different, provided it's not outright deceptive.
- Using AI effectively is itself a skill that needs training, especially if you're already good at the critical thinking stuff.
- When using AI, most of what I do anyway is critical thinking. I'm constantly reviewing the AI's code and finding little places where it tried to get away with a shortcut, or started to over-architect a solution, or both.
Lurking between the lines in arguments about AI writing/code/art is the fact that whether an activity is "gym" or "job" is often in the eye of the beholder.
People who never "went to the gym" in a field are all too eager to brush off the entire design space as pure Job that can and should be fully delegated to AI posthaste.
I think that comes down to documenting the mindset as a goal and then using all the AI, scaffolding, and tools available to that system to help you nurture that mindset.
This is a great framework for self-development, but I wonder if the Job vs. Gym analogy is a bit premature. There seems to be a level of Silicon Valley optimism here that assumes AI already surpasses human capability in these creative areas. From my perspective, AI only outperforms in areas where the human hasn't developed a real craft. Is it possible that the current hype is causing us to undervalue the unique quality of human-only output?
You're really rizzing up the whole "AI can do almost everything better than humans" point. Is there a chance that your investments are causing you to sensationalize things a bit? Because I can promise you, AI can only do better the things that I have absolutely no skill in.
If you need to believe that everyone's only in it for their investment portfolio, for you to sleep well at night, I mean, you do you, but recognize that that's a giant balloon of copium you're huffing.