for each one – “…yet”
“Using deep feature analysis, we used pictures of authenticated Raphael paintings to train the computer to recognize his style to a very detailed degree, from the brushstrokes, the color palette, the shading and every aspect of the work,” mathematician and computer scientist Hassan Ugail from the University of Bradford in the UK explained in December, when the researchers’ findings were published.
"The computer sees far more deeply than the human eye, to microscopic level. "
Haha, Google DeepDream or whatever from 10 years ago is akin to a stone spear when we’re talking about weapons
checkmate
I dunno – I didn’t downvote you! :)
I think we can have compassion for young people who commit suicide because of suffering without thinking it’s a good or ideal thing. The protest here, or euthanasia because you’re old or sick, is a different thing.
For everyone (myself included) who condones suicide too readily – I always think about it like this:
We were dead for billions of years and we will be dead for billions of years. This little sub-100-year run we all have is just a flash in the pan, and even if you’re not having a good time at all and think you never can, surely sticking it out for the novelty, if nothing else, is better in most cases of suffering than checking out too early. We’ll all be back there soon enough.
Why do we teach everyone to only pull your gun if you’re going to kill someone – unless you’re police, then just pull it any time you’re worried or confused!
They did the one thing they were trained to do – pull first, ask for fire extinguishers later
shoot out the fire or scare the guy so much he stops being on fire – only options
That’s what I thought – it seems like there’s a group of people this bothers who aren’t interested regardless of the outputs. It feels ideological, but I won’t go that far and claim that for you or them – anyway, this isn’t made for you, and it’s OK that we feel differently.
Thanks for sharing your feedback and opinion again! Have a great day!
Do you have any articles or reading I can do on what those ‘thousand’ things would be? I can definitely build that into the model either with fine-tuning or connecting GPT to the internet.
I wholesale disagree that things can’t be fixed, and your logic there doesn’t really track. In general, your manner reminds me of the famous Sartre quote. You don’t seem to be interested in engaging in good faith, and your failure to even attempt an answer to my question suggests your true motives.
“…If you press them too closely, they will abruptly fall silent, loftily indicating by some phrase that the time for argument is past.”
Have a great day and look out for the next update! I will incorporate your feedback into the changes.
I actually have an old project that illustrates books automatically – https://github.com/pwillia7/booksplitter
I haven’t looked at it in a while – good idea, and I’ll try to rebuild that on this stack. Here’s an example output from that (this is pretty ‘dumb’ – it just generates prompts based on the text and does some style locking via prompts): https://docs.google.com/document/d/1IsnynQZoxOBmZx9Jac4DfWn15YCevG63CxsIkbu8tgE/edit
So, this is more than just running the image through an AI generator – it uses a few techniques to lock the generated image to the lines in the original, and the prompt is aware of the article data and the image via GPT vision, ControlNet, and the internet.
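Roughly, the line-locking step could look something like this sketch using the diffusers library with a Canny-edge ControlNet – the model choices, filenames, and the hard-coded prompt are illustrative assumptions, not the extension’s exact setup:

```python
# Sketch (not the extension's actual code): constrain generation to the
# original image's lines with a Canny ControlNet, then generate from a prompt
# that would normally be written by GPT vision from the article text + image.
import numpy as np
import torch
import cv2
from PIL import Image
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel

# Extract edges from the original Wikipedia image so the output keeps its lines.
original = Image.open("lighthouse_etching.png").convert("RGB")  # assumed filename
edges = cv2.Canny(np.array(original), 100, 200)
control_image = Image.fromarray(np.stack([edges] * 3, axis=-1))

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

# In the real flow the prompt comes from GPT vision + article data; hard-coded here.
prompt = "photorealistic Lighthouse of Alexandria seen from the harbor, daylight"
image = pipe(prompt, image=control_image, num_inference_steps=30).images[0]
image.save("lighthouse_generated.png")
```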
Also, this does not replace the images on Wikipedia – it’s a Chrome extension that lets you toggle between the original and generated images. The intent is for when you see an old 1700s etching and wonder what it really looked like, or see a poorly drawn Mughal-era painting and wonder what the scene might have looked like in real life. The only real ‘functional’ use I’ve seen while building it is with coins and other things that are worn down – it does a pretty good job of making that stuff more visible. There are a few coin examples in the post.
Can you look at the line drawing of the Lighthouse of Alexandria and the AI-generated image for me and tell me whether there’s some level of fidelity improvement that would make you feel differently? I struggle to find a lot of differences other than the color.
The ‘upscale’ button could just let us start from a higher-resolution starting image with all details preserved – in the painting of the lighthouse, where the boy is removed, that kind of thing would get fixed and small figures would be much better preserved, at the cost of generation time. I’m not saying to just upscale the AI image.
On the comment about fiction/fantasy – The majority of the images we’re modifying are not ‘primary sources’ in that Hermann Thiersch never saw the Lighthouse – This feels like the same level of fantasy since we’re using his original image with such high fidelity. I’m curious to get your thoughts.
Thanks for the feedback!
Oh, I see – I meant the 2nd lighthouse picture, the one with much higher fidelity to the original image. I see what you’re saying about the first image.
No need to be ugly I don’t think — Thanks for the feedback all the same.
I honestly did not notice it because my focus was on the lighthouse structure itself, and that level of fidelity wasn’t something I personally needed in an addendum to (not a replacement for) the original image.
So, I could improve fidelity on tiny subjects by using larger images from the start, but that would dramatically increase generation time.
Right now an image runs through GPT AND generates in about 10–15 seconds. If we pretend you were using this, what’s the max time you’d want to wait for the highest quality image?
If I added an upscale button that produced a 4× higher-detail/fidelity version of the AI image but took longer, would that be a solution?
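To make that concrete, one way such a button could be wired up is a diffusion upscaler pass over the already-generated image – a minimal sketch below, assuming the x4 Stable Diffusion upscaler and placeholder filenames; it illustrates the time/quality trade-off rather than an actual implementation:

```python
# Sketch of a possible "upscale" button backend: run the already-generated AI
# image through a 4x diffusion upscaler. Slower, but recovers small detail.
# Model choice and filenames are assumptions for illustration.
import torch
from PIL import Image
from diffusers import StableDiffusionUpscalePipeline

pipe = StableDiffusionUpscalePipeline.from_pretrained(
    "stabilityai/stable-diffusion-x4-upscaler", torch_dtype=torch.float16
).to("cuda")

low_res = Image.open("lighthouse_generated.png").convert("RGB")
# Reusing the same prompt helps the upscaler stay faithful to the scene.
prompt = "photorealistic Lighthouse of Alexandria seen from the harbor, daylight"
upscaled = pipe(prompt=prompt, image=low_res).images[0]  # 4x the input resolution
upscaled.save("lighthouse_generated_4x.png")
```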
Thanks for the feedback!
I’m going to lock “I’m old greegggggggg” into the generation prompt
Yes – I must never use the word ‘improve’ again, that is clear haha – Do you like ‘modernize’ or ‘update’? Which words are least upsetting?
Somewhere else, someone gave me the idea to build different fine-tuned models that are more aware of the styles and techniques of different periods. Thanks for the great feedback – I appreciate you!
I wanted to avoid cherry-picking examples, so I just did 15 images and posted them to see how the ‘anger’ reaction has changed since last time.
The drawing is the original and the painting is the AI image :)
On that image too, I can’t even see any difference in the lines at all – which differences are most obvious and upsetting to you?
Thanks for the feedback!
I will say I can feel the hype train with Manor Lords, which I usually am not a part of. I like that kind of game and already had Farthest Frontier, so I picked it up.
I was pretty… shocked by how much was unfinished and how little soul and love the game felt like it had.
I figured I got duped and someone paid every YouTuber on a slow week to hype it up since they missed some publisher deadline or whatever.