A paint app for my iPad where I can micromanage the UX
I haven't been good at writing blog posts for the app; mostly the updates have been on Discord. I've put a huge amount of work into the app though, giving it an "infinite" (well, large - 131,072 x 131,072) canvas, which is fairly rare for a raster app. But there are still bugs and barely any features compared to the competition.
Anyway, here's a demo video showing off the UI and existing features a bit: https://www.youtube.com/watch?v=bpOD4rEcoW8
And a few screenshots:
Here's a recent shot of Femto showing the debug log (helpful for debugging on iPad without doing a very slow development build). I also spent quite a while making custom sliders with Unity's UIToolkit. The built-in controls are all designed for Unity editor utilities and aren't runtime-ready, in my opinion. But I also wanted some very specific UX: being able to nudge the sliders by tiny amounts without the handle snapping to the cursor position, and having the slider move in smaller increments the further you pull the cursor away from it. In an art app you sometimes have to change things by tiny amounts, like nudging a color just a bit or adjusting the flow slightly. It took a while to get right, but I'm happy with it now.
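To give an idea of the "pull away for finer control" behavior, here's a rough sketch of how a UIToolkit pointer manipulator for it could look. This is not my actual slider code - the class name and the applyDelta callback are made up - but the idea is the same: the value moves by a scaled delta rather than jumping to the pointer, and the scale shrinks as the pointer gets further from the control.

```csharp
using System;
using UnityEngine;
using UnityEngine.UIElements;

// Sketch of a "precision drag" manipulator for a custom runtime slider.
// Attach it to the slider's track element and feed the deltas into your model.
public class PrecisionDragManipulator : PointerManipulator
{
    readonly Action<float> applyDelta;   // hypothetical callback into the slider's value
    Vector3 lastPosition;
    bool dragging;

    public PrecisionDragManipulator(Action<float> applyDelta)
    {
        this.applyDelta = applyDelta;
    }

    protected override void RegisterCallbacksOnTarget()
    {
        target.RegisterCallback<PointerDownEvent>(OnPointerDown);
        target.RegisterCallback<PointerMoveEvent>(OnPointerMove);
        target.RegisterCallback<PointerUpEvent>(OnPointerUp);
    }

    protected override void UnregisterCallbacksFromTarget()
    {
        target.UnregisterCallback<PointerDownEvent>(OnPointerDown);
        target.UnregisterCallback<PointerMoveEvent>(OnPointerMove);
        target.UnregisterCallback<PointerUpEvent>(OnPointerUp);
    }

    void OnPointerDown(PointerDownEvent evt)
    {
        dragging = true;
        lastPosition = evt.position;
        target.CapturePointer(evt.pointerId);   // keep receiving moves outside the element
    }

    void OnPointerMove(PointerMoveEvent evt)
    {
        if (!dragging) return;

        Vector3 delta = evt.position - lastPosition;
        lastPosition = evt.position;

        // The further the pointer is above or below the control, the finer the control.
        float verticalDistance = Mathf.Abs(evt.localPosition.y - target.layout.height * 0.5f);
        float sensitivity = 1f / (1f + verticalDistance / 50f);   // 50px is an arbitrary falloff

        // Convert horizontal pixels into a normalized 0..1 value change.
        float valuePerPixel = 1f / Mathf.Max(1f, target.layout.width);
        applyDelta(delta.x * valuePerPixel * sensitivity);
    }

    void OnPointerUp(PointerUpEvent evt)
    {
        dragging = false;
        target.ReleasePointer(evt.pointerId);
    }
}
```

Hooking it up is just something like `track.AddManipulator(new PrecisionDragManipulator(d => flow.Value = Mathf.Clamp01(flow.Value + d)))` for a hypothetical normalized flow value.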
The above also shows an early attempt at adding pencil-like grain texture. I spent a little while on that too, since a good pencil brush is important - I like the pencil grain, and I often tilt the brush to draw larger shadow areas just like a real pencil. I drew with some traditional pencils to get a reference.
Here's a close-up of the 4B, and the results I currently have in Femto.
There are still some things missing, like how a real pencil has a harder, darker edge near the point, and I'm still using a round brush shape, which doesn't look quite right either. But overall I'm pretty happy with it and am starting to do more practice in Femto. Here are some quick sketches I did from a Loomis book I'm reading now.
Undo/redo history was tough and made the code more complicated. It works, but it's still a bit slow, so I'll need to revisit it in the future. I need to profile it, since I currently don't have any good ideas on how to make it faster. The last few days have been satisfying, since there's a bunch of low-hanging fruit that I'm able to knock out quickly, like some simple brush settings and different brush textures. But I'm trying to resist too much feature creep.
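When I do get around to profiling it, the first step will probably be something like ProfilerMarkers around the undo apply step. Just a sketch - this isn't code that exists in the app yet:

```csharp
using Unity.Profiling;

// Hypothetical undo stack, instrumented so the apply step shows up in the Profiler.
public class UndoStack
{
    static readonly ProfilerMarker applyMarker = new ProfilerMarker("Femto.Undo.Apply");

    public void Undo()
    {
        using (applyMarker.Auto())
        {
            // ...restore the previous canvas/tile state here...
        }
    }
}
```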
With the recent addition of a color picker, I'd say Femto is finally good enough to use for my practice, and I've been using it more casually, which is great. Of course, there are a million things I still want to do, and another annoying crash on iPad that got introduced recently. I'm kind of proud of what's there already, though! I feel like one of those people who wants to make pizza completely from scratch, including growing all the ingredients...except with art, and pixels.
I added a persistent data store to the app in the form of an embedded SQLite database. I think it makes sense to use SQL for anything beyond basic settings: I need to store all the user's projects and settings, image layers, brush settings, and so on. SQL makes it easy to do things like update all the objects when I change the schema, sort by creation or update time, and run arbitrary queries. Why reinvent the wheel with some weird custom serialization?
It was pretty hard to get a SQLite database running, though. There are a few different options, but I kept running into problems: missing interop DLLs, confusion about which platforms they would run on, or really old, mysterious DLLs from random places on GitHub. I ended up installing NuGet for Unity and using sqlite-net, because it was easy to install and it just worked, unlike the other solutions.
sqlite-net is designed around being an ORM, which I didn't really want, but so far it has worked out well. I did have to hack in support for UniRx reactive properties, but that was pretty easy.
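To give a flavor of it, here's roughly what the ORM side looks like. The classes and names here are invented for illustration - this is not Femto's actual schema - but the sqlite-net API is real:

```csharp
using System;
using System.Collections.Generic;
using SQLite;            // from sqlite-net / sqlite-net-pcl
using UnityEngine;

public class ProjectRecord
{
    [PrimaryKey, AutoIncrement] public int Id { get; set; }
    public string Name { get; set; }
    public DateTime CreatedAt { get; set; }
    public DateTime UpdatedAt { get; set; }
}

public class LayerRecord
{
    [PrimaryKey, AutoIncrement] public int Id { get; set; }
    [Indexed] public int ProjectId { get; set; }
    public int SortOrder { get; set; }
    public string TilePathPrefix { get; set; }   // where this layer's PNG tiles live on disk
}

public static class Db
{
    static SQLiteConnection conn;

    public static void Open()
    {
        // Application.persistentDataPath is writable on iOS and desktop alike.
        var path = System.IO.Path.Combine(Application.persistentDataPath, "femto.db");
        conn = new SQLiteConnection(path);

        // CreateTable also adds columns that are new since the last run, which is
        // part of why SQL beats custom serialization for schema changes.
        conn.CreateTable<ProjectRecord>();
        conn.CreateTable<LayerRecord>();
    }

    public static List<ProjectRecord> RecentProjects() =>
        conn.Table<ProjectRecord>().OrderByDescending(p => p.UpdatedAt).ToList();

    public static List<LayerRecord> LayersFor(int projectId) =>
        conn.Table<LayerRecord>().Where(l => l.ProjectId == projectId)
            .OrderBy(l => l.SortOrder).ToList();
}
```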
Beyond that, I added continuous saving so whenever you draw, your changes get saved to PNGs on the disk. Then you can reload your project from the gallery. It feels almost like a real app now! Unfortunately, I got Unityed again, because Unity apparently doesn't have any way to read and write 16-bit textures at runtime? I could only get EncodeToPNG() to write 8 bits per channel. EncodeToEXR() can give you a 16-bit lossless file, but LoadImage() won't read it.
So again I ended up going the 3rd party route, and I'm using a fork of a library called pngcs that someone helpfully made to read/write from Unity Texture2Ds. It's unfortunately pretty slow, but it does work. I'll have to figure out a faster solution in the future.
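For reference, the stock 8-bit save path itself is pretty simple - something like this sketch, with made-up names, which is fine until you need more than 8 bits per channel:

```csharp
using System.IO;
using UnityEngine;

public static class TileSaver
{
    public static void SaveTilePng(RenderTexture tile, string path)
    {
        var previous = RenderTexture.active;
        RenderTexture.active = tile;

        // ReadPixels pulls the tile off the GPU into a CPU-side Texture2D.
        var tex = new Texture2D(tile.width, tile.height, TextureFormat.RGBA32, false);
        tex.ReadPixels(new Rect(0, 0, tile.width, tile.height), 0, 0);
        tex.Apply();

        RenderTexture.active = previous;

        // EncodeToPNG only ever writes 8 bits per channel, regardless of the source format.
        File.WriteAllBytes(path, tex.EncodeToPNG());
        Object.Destroy(tex);
    }
}
```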
I also got Unityed by the UIToolkit ScrollView being frustratingly half-baked. No real solution there yet, it's just going to have to feel bad for a while.
I'm pretty close to being able to practice art with this thing, I think. Technically I could do it now, but I guess I want undo/redo support, a color picker, and just some basic UI quality-of-life stuff first.
I finally figured out my iPad glitch problems: it turns out I was using 16-bit precision in a shader to do my vertex transformation, and my vertex coords for drawing brush strokes are just pixel coords into the artwork. When they got big, like x >= 2000, precision was being lost, I'm guessing in the matrix transformation. I switched it to 32-bit floats and now it looks pretty consistent (fingers crossed), as you can see from my lovely screenshot.
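That also lines up suspiciously well with the glitches always starting right around x = 2048: a 16-bit float only has 11 bits of significand, so integer pixel coordinates are exact up to 2048 and then start rounding to multiples of 2. You can see it with Unity's half conversion helpers (just an illustration, not code from the app):

```csharp
using UnityEngine;

public static class HalfPrecisionDemo
{
    public static void Log()
    {
        // Round-trip a few pixel coordinates through 16-bit float precision.
        foreach (float x in new[] { 1000f, 2047f, 2049f, 4001f })
        {
            float asHalf = Mathf.HalfToFloat(Mathf.FloatToHalf(x));
            Debug.Log($"{x} -> {asHalf}");   // 1000 and 2047 survive; 2049 -> 2048, 4001 -> 4000
        }
    }
}
```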
Still having some crashes though, so I turned on the 40-minute debug cloud builds (sad face) to try and figure that out. I also made a quick gallery scene so you can set your screen size, and did a save-to-photos button that pulls the data off the GPU and saves to a PNG.
Beyond that, I've been trying to work more with UniRx, a reactive programming library for Unity that I've been meaning to learn for a while. It brings in some of the concepts I like from React (the JavaScript framework), which for me make UI programming so much nicer. There's a learning curve though, and it doesn't have all the really good stuff from React/MobX, like computed properties and the simple render() concept, but I guess that's probably tough to do in a game engine where performance is such an issue.
UniRx gives you reactive properties - essentially variables that send events whenever you change them. You can hook your UI up to your variables and vice versa, and then you don't have to worry about keeping the UI in sync with your model. It can do a lot of other things too, like throttling and debouncing; I'm still trying to wrap my head around the extent of it. Even if all I get out of it is reactive properties, I think it would still be really valuable to me.
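A tiny sketch of the idea, using made-up names (BrushSize, a "brush-size" slider) rather than Femto's real ones:

```csharp
using UniRx;
using UnityEngine;
using UnityEngine.UIElements;

public class BrushSettings
{
    // A ReactiveProperty is a value that pushes change notifications to subscribers.
    public readonly ReactiveProperty<float> BrushSize = new ReactiveProperty<float>(8f);
}

public class BrushPanel : MonoBehaviour
{
    public UIDocument document;
    readonly BrushSettings settings = new BrushSettings();

    void Start()
    {
        var sizeSlider = document.rootVisualElement.Q<Slider>("brush-size");

        // Model -> UI: whenever the value changes, update the slider.
        settings.BrushSize
            .Subscribe(size => sizeSlider.SetValueWithoutNotify(size))
            .AddTo(this);

        // UI -> model: whenever the slider changes, write it back.
        sizeSlider.RegisterValueChangedCallback(evt => settings.BrushSize.Value = evt.newValue);
    }
}
```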
I have had some time to work on the paint app recently, but iOS platform differences are driving me crazy...somehow on the iPad there are visual glitches that don't show up on desktop at all. It's really hard to troubleshoot, so instead I just come up with hunches and spend a bunch of time implementing them. But so far those have just changed the nature of the glitches and not completely fixed them. For a while I thought maybe iOS only allows power-of-two RenderTextures, or has other RenderTexture size limits (even though I can't find documentation saying that). But now I'm using a totally tile-based system where the canvas is split up into a bunch of power-of-two tiles, and somehow there are still visual glitches on iPad when you draw at x >= 2048, even though there's no technical difference between those tiles and the ones before them. It's really baffling!
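For context, the tile addressing itself is about as simple as it gets, which is part of why this is so baffling. With (hypothetical) 2048px tiles it's just:

```csharp
using UnityEngine;

public static class TileMath
{
    public const int TileSize = 2048;   // power-of-two tile size, just for illustration

    // Which tile a canvas pixel falls in, and where it lands inside that tile.
    public static (Vector2Int tile, Vector2Int local) Locate(Vector2Int canvasPixel)
    {
        var tile = new Vector2Int(canvasPixel.x / TileSize, canvasPixel.y / TileSize);
        var local = new Vector2Int(canvasPixel.x % TileSize, canvasPixel.y % TileSize);
        return (tile, local);
    }
}
```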
I've also been trying to figure out what other paint apps are doing to improve stroke appearance. I think there are a lot of little tricks going on, especially when you zoom in really close. A stroke is actually made up of tons of little texture stamps that are interpolated along the path of your stylus events, so there needs to be smoothing, spacing, blending, etc. Apple sends you estimated/predicted stylus events and then updates them later, so you need to draw the stroke and then update it when new data comes in (that has to be done for stroke smoothing/stabilization too). It switches between bilinear and point filtering depending on the DPI, so you can zoom out and have things look smooth, but zoom in and see individual pixels.
Possibly there's a separate antialiasing step that needs to be done on the stroke to reduce the visibility of the individual brush textures too, but I'm not sure...I think at least there's some blending other than what's possible with Unity's blend modes. There's a lot going on for just a simple brush stroke!
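As a starting point, the spacing/interpolation part looks roughly like this - a bare-bones sketch that ignores pressure, smoothing, and the predicted-touch updates mentioned above, where the drawStamp callback stands in for whatever actually blits the brush texture:

```csharp
using System;
using UnityEngine;

// Emits evenly spaced stamp positions along the stroke as stylus samples come in.
public class StrokeStamper
{
    readonly float spacing;   // distance between stamps, in canvas pixels
    float leftover;           // distance traveled since the last stamp, carried across segments
    Vector2 lastPoint;
    bool hasLastPoint;

    public StrokeStamper(float spacing) { this.spacing = spacing; }

    public void AddSample(Vector2 point, Action<Vector2> drawStamp)
    {
        if (!hasLastPoint)
        {
            hasLastPoint = true;
            lastPoint = point;
            drawStamp(point);   // always stamp the very first sample
            return;
        }

        Vector2 segment = point - lastPoint;
        float length = segment.magnitude;
        if (length <= Mathf.Epsilon) return;
        Vector2 dir = segment / length;

        // Distance along this segment at which the next stamp should land.
        float distToNext = spacing - leftover;
        while (distToNext <= length)
        {
            drawStamp(lastPoint + dir * distToNext);
            distToNext += spacing;
        }

        // Remember how far past the last stamp we ended up.
        leftover = length - (distToNext - spacing);
        lastPoint = point;
    }
}
```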
Here's a screenshot of the latest in a long line of iPad glitches. Circles to the left of the red 2048px line are all identical, circles to the right sometimes get scaled incorrectly.
WIP of Femto, as of writing this.
I was recently working on some terrain tools for the Unity editor, and while using the stylus to draw terrain, I realized I wasn't that far away from writing a paint app. Since I spend a lot of time drawing on my iPad, I got really excited about being able to practice drawing and painting in my own app where I could have the UI and toolset exactly how I wanted it. So I pivoted to doing that instead.
There are a few interesting hurdles that I've encountered so far:
I think it would be fun to write a short blog post on each of those topics, so this post is about the first hurdle.
Unity is meant for making games, which are usually fullscreen experiences where the screen is completely redrawn as fast as possible (ideally 60 times per second or more). For games, it's okay to use a ton of CPU and GPU resources - the user isn't expecting to multitask anyway. Crucially, that means a game will probably drain your battery much faster on a tablet than an app would, which is no good for a paint app that you might be working in for hours.
An application doesn't have this need to refresh the screen as fast as possible, so it would be designed to refresh only when needed, or even to refresh only the parts of the screen that have changed. Unity just doesn't work that way.
So why write an app in Unity at all? It's probably not a good idea, but I can think of a few reasons:
In my experience, paying too much attention to #1 above is usually a mistake, and it's better to use the right tools for the job than to limit yourself to your existing skill-set. But I think reasons #2 and #3 are interesting, and I'm just doing this for fun anyway.
If you're like me, your first instinct might be to just set Application.targetFrameRate to something low, like 1 FPS, when the user is not actively doing anything. The issue there is that Unity only handles input events during Update(), so there will be a noticeable delay between the user trying to do something and your app noticing it and bringing the frame rate back up.
Fortunately, Unity does give us another option. This article at secretlab.institute explains how to shut off Unity's core game loop when idle, which means your program will only use CPU and GPU resources when the user is actively doing something. That alone should improve battery life by a lot, assuming the user isn't constantly interacting with your app.
(Edit - As of 7/22 I'm no longer using the below technique, since I learned about Unity's UnityEngine.Rendering.OnDemandRendering feature - I'd recommend looking into that first, since it is a simpler way of doing more or less the same thing!)
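The basic shape of that approach is something like this - a sketch, not my exact code: render every frame while the user is interacting, then drop to a very low render rate after a short idle period, while input and scripts keep running either way.

```csharp
using UnityEngine;
using UnityEngine.InputSystem;
using UnityEngine.Rendering;

public class IdleRenderThrottle : MonoBehaviour
{
    public float idleSeconds = 2f;   // how long with no input before throttling
    public int idleInterval = 30;    // render only 1 of every 30 frames when idle

    float lastInputTime;

    void Update()
    {
        // Using the "new" Input System to detect activity (mouse, pen, or touch).
        bool active =
            (Mouse.current != null && Mouse.current.delta.ReadValue() != Vector2.zero) ||
            (Pen.current != null && Pen.current.tip.isPressed) ||
            (Touchscreen.current != null && Touchscreen.current.primaryTouch.press.isPressed);

        if (active) lastInputTime = Time.unscaledTime;

        bool idle = Time.unscaledTime - lastInputTime > idleSeconds;
        OnDemandRendering.renderFrameInterval = idle ? idleInterval : 1;
    }
}
```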
The linked technique is a bit complicated though, since it requires writing native plugins for each supported platform to detect when the user has moved the mouse or touched the screen, so that Unity's game loop can be turned back on. I found that with Unity's "new" Input System, at least, that's no longer required, and you can do it without writing any native code. The trick is to leave some Unity SubSystems running even when idle - in particular, the ones that allow Unity to detect user input and tell your code about it.
Okay, but there's still a problem - now the SubSystem that regulates Unity's framerate is disabled. Since Unity no longer has to wait for VSync or for the GPU to render, the remaining SubSystems run as fast as possible and use 100% CPU! I was able to work around that by inserting a simple 1ms delay to give the CPU a break.
On Windows, this approach seems to work quite well: resource usage drops to near zero whenever the user isn't doing anything, and the app responds instantly when they touch the mouse or stylus. I assume it's also working on iPad, since anecdotally my app's battery usage seems comparable to other paint apps, but I don't know how to accurately measure power usage on iPad yet, so take that with a grain of salt. On Linux, I think something about this approach still isn't working perfectly, but I haven't had time to look into it yet.
If you want to try this technique yourself, here's the code I'm using.