r/MachineLearning
[D] Paperswithcode has been compromised
News

I was randomly looking at papers on CIFAR, and when I opened the website to see an aggregated list, I saw that all the text had been replaced with spam.

I have archived the URLs for a bunch of the datasets for reference:

https://archive.is/2Si8H

https://archive.is/KJCx1

https://archive.is/ZDBL5

https://archive.is/BHVsk

https://archive.is/b9xUp

https://archive.md/8BLVA

https://archive.md/SmoCt

https://archive.md/5UZLu

edit: added more examples



[D] Machine Learning, like many other popular fields, has so many pseudo-science people on social media
Discussion

I have noticed that a lot of people on Reddit only learn pseudo-science about AI from social media and then tell everyone how AI works in all sorts of imaginary ways. They borrow words from fiction or myth to explain AI in strange ways, and they look down on actual AI researchers who don't share their beliefs. They also keep using big words that aren't correct, or aren't even used in the ML/AI community, just because they sound cool.

And when you point this out to them, they instantly lose it and accuse you of being closed-minded.

Has anyone else noticed this trend? Where do you think this misinformation mainly comes from, and is there any effective way to push back against it?


[R] Is it true that most of AI is just data cleaning and not fancy models?
Discussion

I’ve been reading about how in real-world AI, most of the work isn’t the cool stuff like neural nets, but actually just getting the data usable. Things like cleaning missing values, feature engineering, and framing the problem right.
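For what it's worth, here is a minimal sketch of what that unglamorous part often looks like in pandas/scikit-learn. The file, column names, and imputation choices are made up for illustration, not from any particular project:

```python
import pandas as pd
from sklearn.impute import SimpleImputer

# Hypothetical raw table; column names are placeholders.
df = pd.read_csv("users.csv")

# Drop rows where the target itself is missing -- nothing to learn from those.
df = df.dropna(subset=["churned"])

# Impute missing numeric features with the median (robust to outliers).
num_cols = ["age", "monthly_spend"]
df[num_cols] = SimpleImputer(strategy="median").fit_transform(df[num_cols])

# Simple feature engineering: turn a signup date into account age in days.
df["signup_date"] = pd.to_datetime(df["signup_date"])
df["account_age_days"] = (pd.Timestamp.today() - df["signup_date"]).dt.days

# One-hot encode a categorical column so a model can consume it,
# keeping missingness as its own indicator column.
df = pd.get_dummies(df, columns=["plan_type"], dummy_na=True)
```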

Some people also said prompt engineering is the “new programming,” especially with LLMs becoming so dominant.

I came across a blog that listed 10 things you only realize after starting with AI — like how feedback loops can mess up your model after deployment, or how important it is to define your objective before even touching code.
It kinda shifted my view on what matters early on.

Is this the general consensus? Or is it still more about algorithms in practice?