With modern CPUs supposedly shipping with ‘AI cores’: how long do you think it will take for proper open-source, privacy-respecting productivity tools (something like whatever M$ Copilot is supposed to be?) to become available?

Personally, I would love to see something like ‘passive’ OCR integrated with the display server: the ability to pause any video and select whatever text is on screen (even handwritten) as naturally as in a text document, without any additional hassle, would be really useful
Also useful for circumventing any blocks certain websites put on articles to prevent text from being copied

Or an AI grammar checker running natively for LibreOffice.

What are some AI tools you think should be developed for desktop Linux?

  • thevoidzero@lemmy.world · 5 months ago

    Not for handwritten text, but for printed fonts, getting OCR is as easy as drawing a box on the screen with current technology. So I don’t think we need AI for that.

    Personally I use tesseract. I have a simple bash script that, when run, lets me select a rectangle on screen, saves that image to a temp folder, runs OCR on it, and copies the resulting text to the clipboard. Done.

    Edit: for extra flavor you can also use notify-send to show that text in a notification, so you know what the OCR produced without having to paste it.

    • ferret@sh.itjust.works · 5 months ago

      Share the script! Share the script! Share the script! (Nobody will judge you if it is written strangely or is hard to adapt; reading other people’s code is always fun (bash scripts are code, fight me))

      • thevoidzero@lemmy.world · 5 months ago

        Hi there, I did say it’s easily doable, but I didn’t have a script because I used to run things on the image before OCR manually (like negating dark mode, which I tried to automate in this script; done manually it’s just one command, since I know myself whether it’s dark mode or not; similar for the threshold as well).

        But here’s one I made for you:

        #!/usr/bin/env bash
        
        # imagemagick has a cute little command for importing a screen selection into a file
        import -colorspace gray /tmp/screenshot.png
        # binarize so tesseract gets clean black-and-white input (options go before the filename)
        mogrify -threshold "50%" /tmp/screenshot.png
        # extra magic to invert if the average pixel is dark
        darkness=$(convert /tmp/screenshot.png -format '%[fx:int(mean*100)]' info:)
        if (( darkness < 50 )); then
           mogrify -negate /tmp/screenshot.png
        fi
        
        # now run the OCR
        text=$(tesseract /tmp/screenshot.png - 2>/dev/null)
        printf '%s' "$text" | xclip -selection c
        notify-send OCR-Screen "$text"
        

        So the middle part is there to accommodate images in dark mode: it negates the image based on a threshold that you can change. Without that, you can just use import for the screen capture and tesseract for the OCR, and optionally pipe the result to xclip for the clipboard or notify-send for a notification.
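        The invert decision in the middle can be pulled out into a tiny function if you want to reuse it. This is just a sketch, assuming the mean brightness has already been computed as a 0–100 integer (the `needs_negate` name is mine, not part of the script above):

        ```shell
        #!/usr/bin/env bash

        # Decide whether an image should be inverted before OCR, given its
        # mean brightness as an integer percentage (0 = black, 100 = white).
        # Hypothetical helper; the cutoff of 50 matches the script above.
        needs_negate() {
            local brightness=$1
            (( brightness < 50 ))
        }

        if needs_negate 20; then echo "negate"; else echo "keep"; fi
        if needs_negate 80; then echo "negate"; else echo "keep"; fi
        ```

        Keeping the cutoff in one place makes it easy to tune if your dark theme isn’t that dark.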

        In my use case, I have a keybind that takes a screenshot like this: import png:- | xclip -selection c -t image/png, which gives me a cursor to select part of the screen and copies the selection to the clipboard. I can save that as an image (through another bash script) or paste it directly into messenger applications. And when I need to do OCR, I just run tesseract in the terminal and copy the text from there.
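        The saving step could look something like this. This is a hypothetical sketch of the “another bash script” mentioned above, not the actual one; the output directory and filename pattern are my own choices:

        ```shell
        #!/usr/bin/env bash

        # Hypothetical sketch: dump the PNG currently on the clipboard
        # into a timestamped file, e.g. screenshot-20240101-120000.png
        dir="${1:-$HOME/Pictures}"
        out="$dir/screenshot-$(date +%Y%m%d-%H%M%S).png"
        mkdir -p "$dir"
        xclip -selection clipboard -t image/png -o > "$out" && echo "saved $out"
        ```

        Timestamped names avoid clobbering earlier captures, and taking the directory as an optional argument keeps the keybind flexible.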