Yup, I drive a Toyota Prius and am looking at Nissan Leafs. My wife and I hate all the smart crap in cars, and it’s pretty much everywhere now…
Mama told me not to come.
She said, that ain’t the way to have fun.
Yes, it kind of is. A search engine just looks for keywords and links, and that’s all it retains after crawling a site. It’s not producing any derivative works, it’s merely looking up an index of keywords to find matches.
An LLM can essentially reproduce a work, and the whole point is to generate derivative works. So by its very nature, it runs into copyright issues. Whether a particular generated result violates copyright depends on the license of the works it’s based on and how much of those works it uses. So it’s complicated, but there’s very much a copyright argument there.
Yup, you’re talking to a delivery service, so unless you also work there, there’s really no other reasonable interpretation than not being at the delivery address at the delivery time.
What we are talking about is the act of reading and/or learning and then using that information in order to synthesize new material.
Sure, but that’s not what LLMs are doing. They’re breaking down works to reproduce portions of them in answers. Learning is about concepts; LLMs don’t understand concepts, they just compare inputs with training data to produce synthesized answers.
The process a human goes through is distinctly different from the process current AI goes through. The process an AI goes through is closer to a journalist copy-pasting quotations into their article, which falls under fair use. The difference is that AI will synthesize quotations from multiple (many) sources, whereas a journalist will generally just do one at a time, but it’s still the same process.
And this is why I hate those laws that are intended to protect kids. Yeah, it would be nice if kids couldn’t see stuff they shouldn’t, but it’s even better if my PII isn’t stolen. I’d rather my kids accidentally see porn once in a while than for their identity to be stolen.
By quality I meant resolution, I don’t need 4k, but I do need specific shows my wife and kids like.
I have a NAS set up with some movies and whatnot, so I’ve talked to my wife about setting up a budget to purchase content we want and then cancelling our streaming services. So we’d be limited to what’s available on DVD/Blu-Ray, but most of what my wife and kids watch are still available there.
The cost isn’t the issue, I really hate ads and I’m worried ad-free tiers will go away (or become unreasonably expensive).
I disagree that it needs to be explicit. The current law is the fair use doctrine, which generally has more to do with the intended use than specific amounts of the text/media. The point is that humans should know where that limit is and when they’ve crossed it, with motive being a huge part of it.
I think machines and algorithms should have to abide by a much narrower understanding of “fair use” because they don’t have motive or the ability to intuit when they’ve crossed the line. So scraping copyrighted works to produce an LLM should probably generally be illegal, imo.
That said, our current copyright system is busted and desperately needs reform. We should be limiting copyright to 14 years (as in the original copyright act of 1790), with an option to explicitly extend for another 14 years. That way LLMs could scrape content published >28 years ago with no concerns, along with most content produced >14 years ago (esp. on forums and social media, where a copyright extension is incredibly unlikely). That would be reasonable IMO and sidestep most of the issues people have with LLMs.
On the deep discounts page is The Hex. I haven’t played it, but it’s by the same dev as Pony Island and Inscryption, both of which I thoroughly enjoyed, so I’m going to get it. They’re all on a great sale right now.
Let me know if it works and I’ll follow. I don’t need quality, I just need something for my kids to watch occasionally.
Consider PiHole as a whole home network first line of defense.
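If you want to try it, here’s a rough sketch using Pi-hole’s official Docker image. The timezone, password, and volume name are placeholders, and the exact environment variable names can differ between image versions, so check the image docs for your version:

```shell
# Run Pi-hole via the official Docker image (pihole/pihole).
# Port 53 handles DNS; port 80 serves the admin web UI.
# TZ, WEBPASSWORD, and the volume name are placeholders -- adjust for your network.
docker run -d \
  --name pihole \
  -p 53:53/tcp -p 53:53/udp \
  -p 80:80/tcp \
  -e TZ="America/Denver" \
  -e WEBPASSWORD="changeme" \
  -v pihole_data:/etc/pihole \
  --restart unless-stopped \
  pihole/pihole
```

Then point your router’s DHCP DNS setting at the Pi-hole host so every device on the network uses it as its first line of defense.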
I didn’t have any respect for him before, and now I guess I have disrespect.
And that’s exactly what that page discusses. It links three options you can try:
The first two are paid, the last is FOSS, and it claims each can mount Backblaze B2 as a Windows drive. I haven’t tried any of them, so YMMV.
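Separately from those three, the FOSS tool rclone can also mount B2 as a drive (it may or may not be the option that page lists). A hedged sketch, assuming you’ve already created a B2 bucket and an app key; the remote name, bucket, and drive letter are placeholders:

```shell
# One-time setup: register a Backblaze B2 remote with rclone.
# "b2remote" is a placeholder name; the ID/key come from your B2 app key.
rclone config create b2remote b2 account YOUR_KEY_ID key YOUR_APP_KEY

# Mount a bucket as drive X: on Windows (requires WinFsp to be installed).
# --vfs-cache-mode writes buffers writes locally so normal apps work reliably.
rclone mount b2remote:my-bucket X: --vfs-cache-mode writes
```

On Linux/macOS you’d mount to a directory instead of a drive letter, but the command is otherwise the same.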
That depends on how similar your resulting algorithm is to the sources you were “inspired” by. You’re probably fine if you’re not copying verbatim and your code just ends up looking similar because that’s how solutions are generally structured, but there absolutely are limits there.
If you’re trying to rewrite something into another license, you’ll need to be a lot more careful.
I complain all the time. But that’s not the subject of this post…
Ip Man. Great movies.
I’m not going to be monitoring Chinese code projects. They don’t seem to care much about copyright, so they’ll probably just yoink the code into proprietary projects and not care about the licenses.
What am I going to do, sue someone in China? And decompile everything that comes from China to check if my code was likely in it? That’s ridiculous. If it’s domestic, I probably have a chance, but not if it’s in another country, and especially not one like China that doesn’t seem to care about copyright.
Here are options to mount Backblaze B2 as a drive. It’s $6/TB/month, and I think they allow <1TB, so for 300GB you’d pay ~$2/month. So I think they’re pretty competitive, but I’m not familiar with Google Drive’s terms. They’re certainly in the same ballpark, if not cheaper, but it depends on your egress (how much you download from their service) and Google Drive’s policies around that.
I’m pretty sure I do understand the issue. Here are some facts (and an article to back it up):
And here’s my interpretation/guesses:
So:
Using socketed RAM won’t fix performance issues caused by running out of RAM; that problem is the same either way. Only adding more RAM fixes it, and Apple could just as easily spec “special” RAM so you couldn’t buy compatible modules on the open market anyway (e.g. they’d need a different memory standard regardless, due to Unified Memory).
I have hated Apple’s memory pricing for decades; it has always been way more expensive to add RAM to an Apple device at order time than with PC competitors (I still add my own RAM to laptops, but it’s usually way cheaper through HP, Lenovo, etc. than through Apple at build time). I’m not defending them here; I’m merely saying the decision to use soldered RAM makes a lot of engineering sense, especially with the new Unified Memory architecture they’re using in the M-series devices.
The built-in Digital Wellbeing & Parental controls feature works. I have it on my Android 11 device and haven’t tested on anything newer (it’s not on my GrapheneOS device based on the most recent Android, though).
Settings > Digital Wellbeing & Parental controls > Dashboard > tap the timer icon next to an app and set a limit
If you want something outside of the Google ecosystem (e.g. you’re running GrapheneOS), the following should work (untested):
There are probably others, that was just a cursory check.
That depends, do you copy verbatim? Or do you process and understand concepts, and then create new works based on that understanding? If you copy verbatim, that’s plagiarism and you’re a thief. If you create your own answer, it’s not.
Current AI doesn’t actually “understand” anything, and “learning” is just ingesting input data. If you ask it a question, it isn’t understanding anything; it just matches your prompt against its training data, regurgitates a mix of it, and usually omits the sources. That’s it.
It’s a tricky line in journalism since so much of it is borrowed, and it’s likewise tricky w/ AI, but the main difference IMO is attribution: good journalists cite sources; AI rarely does.