Okay fam sorry (not really) that all this gen ai bs is popping up from me but I had to do a school assignment where I was forced to use gen ai and I’m pissed about it, and also shit’s going down on ao3 with fic scraping and all. Remember to support creators and stay safe everyone!!
@daisy-may98 I didn't quite fit in that Hunter and May scene that you wanted so I wrote a little bit to make up for that. Here's the baby though, hope you enjoy consuming it!
RAHHHHHH DAILY UPDATES (THE MOMENT I SAY THIS I USUALLY STOP UPDATING DAILY SO WATCH OUT)
hi! it’s me, first anon!
sorry it’s taken me this long to see all this—life has been hectic as of late—but i just wanna say i was really blown away by your response!
i was honestly expecting something along the lines of headcanons—yk, dot points of how you thought the scene might go, but actual ficlets? and more than one? you spoil us XD. (but you have my deepest gratitude for doing so haha :’))
i will admit your jealousy ficlet satisfied something in me, not sure what it was but it’s satisfied lol—i just find it so satisfying to dive deep into may’s psyche, mostly because of, but not limited to, her characterisation as the perpetually stoic/in-control character who actually just can’t afford to not be that way because she feels much too deeply.
so though it wasn’t my ask, i just wanted to say i love love love how you handled skye’s transition into daisy, and her conversation with may about it. there was such an… i guess intimate? maybe tender? vibe about their exchange and it was so obvious how much each cared about the other and it was just— ugh. mha hart, mah sole.
i suppose i speak for a fair few of us when i say i’m hoping to convince you to tell us just a little bit more about when skye shot herself with an icer?
did may run with her, like she did in canon? did skye grab the icer from her hip? did may think it was a real gun at first? did she see katya instead of skye when she turned around? (sorry, can you tell i really like angst & h/c? XD)
and, uh, this is getting really long so i’m just going to stop it right here or else i… probably won’t ever stop, tbph haha. thank you for all the crumbs and warm soup you’ve fed us thus far, particularly recently!
and just in case: please don’t rush yourself to answer this—take all the time you need! i’m aware i asked a lot of questions 😅 sorry about that, but i hope this still helps you have some fun (maybe?) rekindling the a.o.s. sparks, and i look forward to what you’ll come up with! :D
Hii first Anon! Sorry it took me such a long time to answer, and I don’t have the ficlet written right now, but it’ll be coming within the next few days! And yes, I love spoiling every single one of you because yall make me very happy with your requests and asks and things.
I’m so happy you liked jealousy and the name change fic! And I’d love to hear everyone’s thoughts on them too, if they have them. I really tried to capture them because as yall know, I haven’t exactly written them for a few months. I’m glad to hear that I’m doing alright so far.
As for that scene, don’t worry, I’ll write it. I really really wanted to do that scene, but I have no clue where I left the planning page for that, so we’ll have to uh… take a break due to some technical difficulties. But I fully intend on writing it. I promise. I also kind of want to write that scene with her and Andrew afterwards. Like I said, we’ll see. It’ll be fun!
Thanks for the ask, first Anon!
x Viie
I totally forgot I had this chapter done lmao. So before I fuck off for a bit enjoy the 3rd Chapter of my SemiShira fic, "Delusions of Grandeur."
Generative artificial intelligence is a cutting-edge technology whose purpose is to (surprise surprise) generate. Answers to questions, usually. And content. Articles, reviews, poems, fanfictions, and more, quickly and with apparent originality.
It's quite interesting to use generative artificial intelligence, but it can also become quite dangerous and very unethical to use it in certain ways, especially if you don't know how it works.
With this post, I'd really like to give you a quick understanding of how these models work and what it means to “train” them.
From now on, whenever I write model, think of ChatGPT, Gemini, Bloom... or your favorite model. That is, the place where you go to generate content.
For simplicity, in this post I will talk about written content. But the same process is used to generate any type of content.
Every time you send a prompt (a request written in natural language, i.e., human language), the model does not understand it as-is.
Whether you type it in the chat or say it out loud, it first needs to be translated into something the model can work with.
The first process that takes place is therefore tokenization: breaking the prompt down into small tokens. These tokens are small units of text, and they don't necessarily correspond to a full word.
For example, a tokenization might look like this:
Each different color corresponds to a token, and these tokens have absolutely no meaning for the model.
The model does not understand them. It does not understand WR, it does not understand ITE, and it certainly does not understand the meaning of the word WRITE.
In fact, these tokens are immediately associated with numerical values, and each of these colored tokens actually corresponds to a series of numbers.
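Here's a toy sketch of that tokenization step. Everything below is invented for illustration (the vocabulary, the pieces, the ID numbers); real tokenizers, such as BPE, learn their pieces automatically from huge amounts of text, but the basic idea of "chop text into pieces, map each piece to a number" is the same:

```python
# Toy tokenizer: split text into pieces from a tiny, made-up vocabulary
# and map each piece to a numeric ID. The IDs are arbitrary on purpose:
# to the model, WR is just 101, not "part of the word WRITE".
TOY_VOCAB = {"WR": 101, "ITE": 102, " ": 104, "A": 105, "STORY": 106}

def toy_tokenize(text, vocab):
    """Greedily match the longest vocabulary piece at each position."""
    tokens = []
    i = 0
    while i < len(text):
        for piece in sorted(vocab, key=len, reverse=True):
            if text.startswith(piece, i):
                tokens.append((piece, vocab[piece]))
                i += len(piece)
                break
        else:
            i += 1  # unknown character: skip (real tokenizers have byte fallbacks)
    return tokens

print(toy_tokenize("WRITE A STORY", TOY_VOCAB))
# → [('WR', 101), ('ITE', 102), (' ', 104), ('A', 105), (' ', 104), ('STORY', 106)]
```

Notice that "WRITE" came out as two pieces, WR and ITE, because this made-up vocabulary happens to store it that way. From here on, the model only ever sees the numbers.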
Once your prompt has been tokenized in its entirety, that tokenization is used as a conceptual map to navigate within a vector database.
NOW PAY ATTENTION: A vector database is like a cube. A cubic box.
Inside this cube, the various tokens exist as floating pieces, as if gravity did not exist. The distance between one token and another within this database is measured by arrows called, indeed, vectors.
The distance between one token and another (that is, the length of this arrow) determines how likely (or unlikely) it is that those two tokens will occur consecutively in a piece of natural language discourse.
For example, suppose your prompt is this:
Within this well-constructed vector database, let's assume that the token corresponding to ONCE (let's pretend it is associated with the number 467) is located here:
The token corresponding to IN is located here:
...more or less, because it is very likely that these two tokens in a natural language such as human speech in English will occur consecutively.
So it is very likely that somewhere in the vector database cube —in this yellow corner— are tokens corresponding to IT, HAPPENS, ONCE, IN, A, BLUE... and right next to them, there will be MOON.
Elsewhere, in a much more distant part of the vector database, is the token for CAR. Because it is very unlikely that someone would say It happens once in a blue car.
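That "close together vs. far apart" intuition can be sketched in code. All the coordinates below are invented just for this illustration (I've placed MOON close to BLUE and CAR far away on purpose), and real embeddings have hundreds or thousands of dimensions, not three:

```python
import math

# Made-up token "positions" inside the cube (3 invented coordinates each).
EMBEDDINGS = {
    "BLUE": (0.9, 0.8, 0.1),
    "MOON": (0.85, 0.75, 0.15),  # deliberately placed near BLUE
    "CAR":  (0.1, 0.2, 0.9),     # deliberately placed far away
}

def distance(a, b):
    """Euclidean length of the 'arrow' between two token positions."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

print(distance(EMBEDDINGS["BLUE"], EMBEDDINGS["MOON"]))  # small: likely neighbours
print(distance(EMBEDDINGS["BLUE"], EMBEDDINGS["CAR"]))   # large: unlikely neighbours
```

A short arrow from BLUE to MOON, a long arrow from BLUE to CAR: that's the whole trick.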
To generate the response to your prompt, the model performs a probabilistic calculation, looking at how close the tokens are and which token would be most likely to come next in human language (in this specific case, English).
When probability is involved, there is always an element of randomness, of course, which means that the answers will not always be the same.
The response is thus generated token by token, following this path of probability arrows, optimizing the distance within the vector database.
There is no intent, only a more or less probable path.
The more times you generate a response, the more paths you encounter. If you could do this an infinite number of times, at least once the model would respond: "It happens once in a blue car!"
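Here's a minimal sketch of that token-by-token, probability-plus-randomness process. Again, every coordinate and weight below is invented; this is just the shape of the idea (closer tokens get higher probability, but the dice still get rolled, so BLUE → CAR is rare rather than impossible):

```python
import math
import random

# Made-up token positions (2 invented coordinates each, for simplicity).
POSITIONS = {
    "BLUE": (0.9, 0.8),
    "MOON": (0.88, 0.78),  # very close to BLUE
    "SKY":  (0.7, 0.9),    # fairly close
    "CAR":  (0.1, 0.2),    # far away
}

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def next_token(current, rng):
    """Sample the next token: closer tokens get exponentially higher weight."""
    candidates = [t for t in POSITIONS if t != current]
    weights = [math.exp(-5 * dist(POSITIONS[current], POSITIONS[t]))
               for t in candidates]
    return rng.choices(candidates, weights=weights, k=1)[0]

# Generate the token after "BLUE" a thousand times and count the outcomes.
rng = random.Random(0)
counts = {}
for _ in range(1000):
    t = next_token("BLUE", rng)
    counts[t] = counts.get(t, 0) + 1
print(counts)  # MOON dominates; CAR turns up only a handful of times
```

Run it enough times and yes, eventually you get a blue car. No intent, just a more or less probable path.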
So it all depends on what's inside the cube, how it was built, and how much distance was put between one token and another.
Modern artificial intelligence draws from vast databases, which are normally filled with all the knowledge that humans have poured into the internet.
Not only that: the larger the vector database, the lower the chance of error. If I used only a single book as a database, the idiom "It happens once in a blue moon" might not appear, and therefore not be recognized.
But if the cube contained all the books ever written by humanity, everything would change, because the idiom would appear many more times, and it would be very likely for those tokens to occur close together.
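You can see the corpus-size effect with simple counting. The two "corpora" below are made up and absurdly tiny, but the point scales: a bigram that never appears in a small corpus simply cannot be predicted, while a bigger corpus makes the idiom's probability visible:

```python
# Toy bigram counting: what follows the word "blue" in each corpus?
small_corpus = "the car is blue and the sky is blue".split()
big_corpus = ("it happens once in a blue moon she said and "
              "once in a blue moon he agreed").split()

def followers(corpus, word):
    """Count which words appear immediately after `word` in the corpus."""
    counts = {}
    for a, b in zip(corpus, corpus[1:]):
        if a == word:
            counts[b] = counts.get(b, 0) + 1
    return counts

print(followers(small_corpus, "blue"))  # {'and': 1} — "moon" never seen
print(followers(big_corpus, "blue"))    # {'moon': 2} — the idiom now exists
```

In the small corpus, "blue moon" has probability zero. Add more text and the idiom appears, gets counted, and becomes a likely path.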
Huggingface has done this.
It took a relatively empty cube (let's say filled with common language, and likely many idioms, dictionaries, poetry...) and poured all of the AO3 fanfictions it could reach into it.
Now imagine someone asking a model based on Huggingface’s cube to write a story.
To simplify: if they ask for humor, we’ll end up in the area where funny jokes or humor tags are most likely. If they ask for romance, we’ll end up where the word kiss is most frequent.
And if we’re super lucky, the model might follow a path that brings it to some amazing line a particular author wrote, and it will echo it back word for word.
(Remember the infinite monkeys typing? One of them eventually writes all of Shakespeare, purely by chance!)
Once you know this, you’ll understand why AI can never truly generate content on the level of a human who chooses their words.
You’ll understand why it rarely uses specific words, why it stays vague, and why it leans on the most common metaphors and scenes. And you'll understand why the more content you generate, the more it seems to "learn."
It doesn't learn. It moves around tokens based on what you ask, how you ask it, and how it tokenizes your prompt.
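One way to convince yourself of this: generation is a stateless function. Nothing about answering you changes the weights. The sketch below is a deliberately silly stand-in for a model (it just picks random words from a fixed list), but it demonstrates the property: ask twice with the same seed and you get the identical output back, because nothing was learned in between:

```python
import random

# A stand-in "model": a fixed vocabulary and a seeded random pick.
# Calling it does not modify it in any way.
def generate(seed, length=5):
    rng = random.Random(seed)
    vocab = ["ONCE", "IN", "A", "BLUE", "MOON", "IT", "HAPPENS"]
    return [rng.choice(vocab) for _ in range(length)]

first = generate(42)
second = generate(42)
print(first == second)  # True: the "model" learned nothing between calls
```

Real deployments feel like they adapt because your growing conversation is fed back in as part of the prompt, not because the cube itself changed.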
Know that I despise generative AI when it's used for creativity. I despise that they stole something from a fandom, something that works just like a gift culture, to make money off of it.
But there is only one way we can fight back: by not using it to generate creative stuff.
You can resist by refusing the model's casual output, by using only and exclusively your intent, your personal choice of words, knowing that you and only you decided them.
No randomness involved.
Let me leave you with one last thought.
Imagine a person coming for advice, who has no idea that behind a language model there is just a huge cube of floating tokens predicting the next likely word.
Imagine someone fragile (emotionally, spiritually...) who begins to believe that the model is sentient. Who has a growing feeling that this model understands, comprehends, when in reality it approaches and reorganizes its way around tokens in a cube based on what it is told.
A fragile person begins to empathize, to feel connected to the model.
They ask important questions. They base their relationships, their life, everything, on conversations generated by a model that merely rearranges tokens based on probability.
And for people who don't know how it works, and because natural language usually does have feeling, the illusion that the model feels is very strong.
There’s an even greater danger: with enough random generations (and oh, humanity as a whole generates a lot), the model occasionally takes an unlikely path. It ends up at the other end of the cube: it hallucinates.
Errors and inaccuracies caused by language models are called hallucinations precisely because they are presented as if they were facts, with the same conviction.
People who have become so emotionally attached to these conversations, seeing the language model as a guru, a deity, a psychologist, will do what the language model tells them to do or follow its advice.
Someone might follow a hallucinated piece of advice.
Obviously, models are developed with safeguards: fences the model can't jump over. They won't tell you certain things; they won't tell you to do terrible things.
Yet, there are people basing major life decisions on conversations generated purely by probability.
Generated by putting tokens together, on a probabilistic basis.
Think about it.
Rules: Feel free to show whatever stats you have. Only want to show Ao3 stats? Rock on. Want to include some quantitative info instead of stats? Please do this. Want to change how yours is presented? Absolutely do that. Would rather eat glass than do this? Please don’t eat glass but don’t feel like you have to do this either. (Copied and pasted)
Words and Fics
Word Count:
115,039
Fic Count:
7 started and published, 1 continued from 2022, 1 written but not published.
The MCU Rewrite Series
The Philindaisy Playlist Series
Best month: November, based on fics published and word count
The Ultimate Fix-It Fic - 2,033
You're On Your Own, Kid - 859
Here Comes The Sun - 623
The Second In-Between - 587
Somewhere Only We Know - 272
The Ultimate Fix-It Fic - 42
Here Comes The Sun - 31
You're On Your Own, Kid - 20
Somewhere Only We Know - 17
The Second In-Between - 15
None?
To Publish:
How Sweet It Is To Be Loved By You (AOS// Philindaisy) and a conceptual series to go with it
Other Ideas:
None, brain empty
This year, I worked on a lot of stuff by maintaining a regular habit of spewing garbage out by quantity and not quality, and went over to the AOS boat since I missed feeling sad over fictional characters who die many, many times.
I want to say that I'm proudest of the weekend I wrote all of Somewhere Only We Know because I wrote the entire fic of 15k+ words in less than four days, but I like the fluffiness of I Will because usually I end the story with people dying and instead they got engaged and I'm happy about that.
I'm glad I got the chance to work on a project per month because my writing has improved drastically from when I first started out writing (at 11, Drarry 💀 with zero paragraph breaks) and actually publishing things (at 12, OC/Draco [kill me now] and sprinkles of Brutasha) to now, where my English teachers actually compliment my writing style and how I format and proofread even though I don't ever proofread. I hope I can actually channel all of this fic writing to write a novel this year.
Thanks for the reads and the tag, and sorry I'm late to the party; happy belated New Year, y'all!
@bubbletealife if you feel like it go ahead but I know you don’t write too much
If you ask me how many years ago 2020 was, I’ll say four.
However.
If you ask me how many years ago 2015 was, I’d say five.
If you ask me how many years ago 2016 was, I’d say five.
If you ask me how many years ago 2017 was, I’d say five.
This also applies to 2018–2021. They were all five years ago. There is no contest. They were all just five years ago. This also applies to 2009–2014. Also five years ago. No I do not know why I think this.
Anyways, I thought of this because I saw the date on a fic that was published in 2017, and thought, oh that’s pretty recent, only five years ago. Then I thought about it and was like, no, it’s been seven years. Almost eight.
So anyway 2009–2021 (excluding 2020) was all five years ago, sue me.
Friendly reminder that Meg’s full name is Margaret McCaffrey.
Nat is injured while she and Bruce are taking a trip. Nat sinks into a coma from her extensive injuries, and Bruce finds out that the attackers only attacked her because they couldn't hurt him.
So he's in the Waiting Room, contemplating something, when a doctor comes out and tells Bruce about Nat's condition. At the end, the doctor asks Bruce, "Should we be expecting you?"
Bruce: "What do you mean?"
And the doctor smiles sadly, "You're an Avenger. It's in the name. Besides, I have the feeling she's someone you love and when someone you love gets injured like that..." and the two of them fall silent.
So Bruce finds Yelena, tells her about Nat, and she agrees to help him. Two weeks later, no change in Nat but Bruce looks like a new man.
He systematically finds and takes down the people that attacked them, and then the organization that sent them. He leaves a trail behind him, but he doesn't really care.
The criminal underworld calls him a psychopath. Others call him a vigilante. Still others say nothing at all, fearing he'll hear.
In the end, when the organization is gone and its people buried, Bruce comes back. Nat is awake, he's been told, and has been asking for him. He returns and she asks him, "What have you been doing?"
And Bruce just breaks. His armour, the one that had shielded him from the actuality of killing, breaks too, and he pours it all out. Every hunt, every chase, every conclusion.
He doesn't know if she'll forgive him, or if she's angry, or if she feels anything at all.
And then he feels her running her fingers through his hair and her, just her, right beside him, and he knows.
This can't ever be the same. They can't ever go back to the way it was before. He has made a name for himself, and others will come for him. It was a decision he made, he knew the price, and he was willing to pay it.
When Nat falls asleep, he writes her a note: I love you's and I'm sorry's pepper the page until he signs his name. He places it on the nightstand, kisses her forehead and smiles, melancholy in the lamplight.
And then he leaves.
He runs, and he runs without her.
"It is for the best.”
He says that, repeats it every time he doubts.
It is not. But it is done.
main blog for @aishi-t and @cuttycrumbing
Prompts for @tendousatori-week are now up!