hate it when I'm doing things I really want to do and I encounter the tiniest speedbump in the process and my brain goes "I shan't!" and fucks off for a month at a time to avoid those speedbumps.
ADHD pro tip: Use psychological warfare on yourself.
For example, in order to do long tasks, like folding laundry, I put on the Mario Hat:
The main feature of the Mario hat is that my headset does not fit over it, so when The Bees™ try to put me back in front of the screen, the headset issue forces me to remember why I put the Mario hat on, and back to the task I go
As a bonus, the Mario hat is also a very clear indicator to my housemates that business is getting done, and they have learned not to distract me when I'm wearing the "goofy-ass cosplay hat"
It's not stupid if it works.
Wait, so you said that you can learn to trust others by building friendships, but how does one go about doing that? Wouldn't someone I don't know be creeped out or annoyed if I suddenly walked up and started talking to them?
Friendships are built of repeated low-stakes interactions and returned bids for attention with slowly increasing intimacy over time.
It takes a long time to make friends as an adult. People will probably think you're weird if you just walk up and start talking to them as though you are already their friend (people think it's weird when I do this, I try not to do this) but people won't think it's weird if you're someone they've seen a few times who says "hey" and then gradually has more conversations (consisting of more words) with them.
I cheat at forming adult friendships by joining groups where people meet regularly. If you're part of a radio club that meets once a week and you just join up to talk about radios, eventually those will be your radio friends.
If there's a hiking meetup near you and you go regularly, you will eventually have hiking friends.
Deeper friendships are formed with people from those kinds of groups when you do things with them outside of the context of the original interaction; if you go camping with your radio friend, that person is probably more friend than acquaintance. If you go to the movies with a hiking friend who likes the same horror movies as you do, that is deepening the friendship.
In, like, 2011 Large Bastard decided he wanted more friends to do stuff with, so he started a local radio meetup. These people started as strangers who shared an interest. Now they are people who give each other rides after surgery and help each other move and have started businesses together and have gone on many radio-based camping trips and have worked on each other's cars.
Finding a meetup or starting a meetup is genuinely the cheat-code for making friends.
This is also how making friends at school works - you're around a group of people very regularly and eventually you get to know them better and you start figuring out who you get along with and you start spending more time with those people.
If you want to do this in the fastest and most dramatic way possible, join a band.
In 2020 I wrote something of a primer on how to turn low-stakes interactions with neighbors and acquaintances into more meaningful relationships; check the notes of this post over the next couple days, I'll dig up the link and share it in a reblog.
The problem here isn’t that large language models hallucinate, lie, or misrepresent the world in some way. It’s that they are not designed to represent the world at all; instead, they are designed to convey convincing lines of text. So when they are provided with a database of some sort, they use this, in one way or another, to make their responses more convincing. But they are not in any real way attempting to convey or transmit the information in the database. As Chirag Shah and Emily Bender put it: “Nothing in the design of language models (whose training task is to predict words given context) is actually designed to handle arithmetic, temporal reasoning, etc. To the extent that they sometimes get the right answer to such questions is only because they happened to synthesize relevant strings out of what was in their training data. No reasoning is involved […] Similarly, language models are prone to making stuff up […] because they are not designed to express some underlying set of information in natural language; they are only manipulating the form of language” (Shah & Bender, 2022). These models aren’t designed to transmit information, so we shouldn’t be too surprised when their assertions turn out to be false.
ChatGPT is bullshit
good read for teachers.
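If you want to make the "predict words given context" point concrete for a class, here's a toy sketch (mine, not the paper's, and everything in it - the corpus, the function names - is made up for illustration): a tiny word-level bigram model. It learns only which word tends to follow which word, so it happily generates fluent-looking strings with no notion of whether they're true.

```python
# Toy sketch (not from the cited paper): a word-level bigram "language model".
# All it learns is which word tends to follow which word - "predict words
# given context" - so it emits fluent-looking text with no model of facts.
import random
from collections import defaultdict

corpus = (
    "the capital of france is paris . "
    "the capital of france is sydney . "   # a false sentence in the training data
    "paris is a lovely city . "
).split()

# Count how often each word follows each other word.
counts = defaultdict(lambda: defaultdict(int))
for word, nxt in zip(corpus, corpus[1:]):
    counts[word][nxt] += 1

def generate(start, length=8):
    """Sample likely continuations; nothing here checks whether they're true."""
    out = [start]
    for _ in range(length):
        options = counts.get(out[-1])
        if not options:
            break
        words = list(options)
        weights = [options[w] for w in words]
        out.append(random.choices(words, weights=weights)[0])
    return " ".join(out)

print(generate("the"))   # e.g. "the capital of france is sydney . paris is"
```

Scaled up by many orders of magnitude, the same basic move - pick a plausible next word - is what the quoted passage is describing: the output can be right, but only because the right strings were likely, not because anything checked them against the world.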