Does Unicode just mean UTF-8?
No, but ...
If you're even asking the question, you can pretend they're the same, and that will probably be good enough for now. (This is even more true in 2024 than it was in 2009.)
If you're going to be writing code, then you should treat Unicode as an opaque handle (pointer). Unicode just means text, and the actual physical encoding -- even how much memory you need -- is magic that you should never ever see.
Because if you do see it, you will be tempted to take shortcuts, and they will seem to work, and the data corruption will be subtle enough that you won't catch it until it is too late to fix properly.
Asking about UTF-8 (or other physical encodings) is like asking which pixels are used to display an "a". Even if you get the answer right, it will fail when someone zooms in, or changes the font.
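To make that concrete, here's a minimal Python sketch of one such shortcut: truncating text by slicing its encoded bytes. (Python is just for illustration; any language that distinguishes text from bytes shows the same failure.)

```python
def truncate_badly(text: str, max_bytes: int) -> bytes:
    # The tempting shortcut: treat the UTF-8 bytes as if they were characters.
    return text.encode("utf-8")[:max_bytes]

print(truncate_badly("hello", 4))   # b'hell' -- works on ASCII, so it ships
bad = truncate_badly("héllo", 2)    # b'h\xc3' -- the slice landed mid-character
print(bad.decode("utf-8", errors="replace"))  # 'h\ufffd' -- quietly corrupted
# Without errors="replace", .decode("utf-8") raises UnicodeDecodeError instead.
```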
That said, if you're sending text to another system, you do need to agree on the message format, and UTF-8 is probably the best default. Treat it as a magic token: plug it in as a constant wherever the API asks for an encoding and you can't just pass through whatever magic name your own caller gave you. As a general rule, unless the document (or at least the code) is older than your computer, it will default to UTF-8, or at least a (possibly mislabeled) close variant.
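In practice that looks something like this sketch (Python, with made-up file names): decode bytes to text at the input edge, work only with text in the middle, and encode back to bytes at the output edge, with "utf-8" as the constant.

```python
# Hypothetical file names, for illustration only.
with open("input.txt", "r", encoding="utf-8") as f:    # bytes -> text at the edge
    text = f.read()

processed = text.upper()   # the middle of the program sees only text, never bytes

with open("output.txt", "w", encoding="utf-8") as f:   # text -> bytes at the edge
    f.write(processed)
```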
Personally, I really want to know how the stuff is represented in memory. It is hard to trust an API if I can't see what it is doing. If you also suffer from this problem, then the next step is to read the code, looking for terms like "encoding", "encoder", "decoder", "codec", "character", and "charset". ("char" will produce a lot of false alarms.)
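If you share that itch, a relatively safe way to scratch it (sketched here in Python; the interpreter's actual in-memory layout is a separate, implementation-specific story) is to ask for an explicit encoding and inspect the resulting bytes:

```python
s = "naïve"
print(len(s))                  # 5 -- five characters, counted as text
print(s.encode("utf-8"))       # b'na\xc3\xafve' -- the same text as UTF-8 bytes
print(len(s.encode("utf-8")))  # 6 -- "how much memory" depends on the encoding
```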
If you want to know how to do it right, perhaps in a new system, then ... well, you're sort of out of luck. The first approximation is to treat "Unicode" as dark magic, and to use UTF-8 as the encoding for talking to the rest of the world. The next level is to dig into the actual Unicode specification. Do not be tempted by shorter/simpler explanations of UTF-8; that way lies subtle data corruption.
And how do you find the Unicode specification? Uh ... that also turns out to be surprisingly complicated, but your difficulties will be good preparation for the standard itself. Some parts will seem bizarrely complicated. Some parts will seem insane. Some parts will actually be insane, because of logically inconsistent requirements. But as a start, https://www.unicode.org/versions/Unicode15.1.0/ provides the now-current standard. Except that you really do have to look at the various annexes and technical reports and such, at https://www.unicode.org/reports/.
Given your interest in UTF-8, you're probably interested in Section 3.9, Unicode Encoding Forms, which you can find from the table of contents by happening to know that it is part of Section 3, Conformance. Or maybe by going to the superseded Unicode Standard Annex 19 (UAX #19, tr19) and noting what it was superseded by. Come to think of it, you might just want to look up UTF-8, UTF-16, and UTF-32 elsewhere to get a general understanding first. But remember not to stop with those simpler explanations -- they do tend to simplify things in a way that permits (usually) subtle data corruption.
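For a first feel of what those three encoding forms do to the same four characters, here's a small Python sketch (the explicit-endian variants are used so no byte order mark gets prepended):

```python
s = "a\u00e9\u4e2d\U0001F600"   # 'a', 'é', '中', and an emoji: four characters

for enc in ("utf-8", "utf-16-le", "utf-32-le"):
    data = s.encode(enc)
    print(f"{enc}: {len(data)} bytes: {data!r}")
# utf-8:     10 bytes (1 + 2 + 3 + 4 per character)
# utf-16-le: 10 bytes (2 + 2 + 2 + 4; the emoji needs a surrogate pair)
# utf-32-le: 16 bytes (4 per character)
```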
If you're wondering why this is so complicated, you might want to read about collation (trying to alphabetize "normally" is one of those logically impossible issues, particularly across languages) and canonicalization (because of course there are multiple canonical forms) and legacy charsets/character sets/encodings. Or take a peek at how to tell when you can add a line break, or how to tell which direction the writing goes, or look up the Turkish I, or ...
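Two of those traps fit in a few lines of Python:

```python
import unicodedata

# Canonicalization: precomposed 'é' versus 'e' plus a combining acute accent.
# They display identically but compare unequal until normalized.
a = "caf\u00e9"    # 4 code points
b = "cafe\u0301"   # 5 code points
print(a == b)                                                               # False
print(unicodedata.normalize("NFC", a) == unicodedata.normalize("NFC", b))  # True

# The Turkish I: Turkish lowercases 'I' to dotless 'ı', but str.lower()
# is locale-independent and always gives 'i'.
print("I".lower())  # 'i' -- right for English, wrong for Turkish
```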
And if you need to get some work done soon, go back to "Unicode is just an opaque handle (pointer), and I don't touch the physical layout directly."