• 0 Posts
  • 17 Comments
Joined 2 years ago
Cake day: July 4th, 2023


  • Different financial institutions (FIs) will all have different appearances, because of the nature of how MX is implemented, and whether it’s on desktop or mobile. In the case of my credit union, it lives right on the online banking home page.

    [screenshots: the MX Platform interface on desktop, and how it might appear embedded in an online banking home page]

    There are two ways that MX can get data from other accounts, which you have to explicitly link in your bank/CU interface. The first method is through Open Banking protocols, which are mercifully hidden from the end user. Seriously, if you’re having trouble sleeping, try reading some of the Open Banking specifications. :D One selects their FI from the list, enters credentials, and completes a 2FA challenge. The other method is screen-scraping, but again this is abstracted away from the end user.

    One of the features where MX slaps more than anyone else (for now) is identifying the source of debits and classifying them. Under the hood, debit and credit card transaction strings are chaos. But even if MX gets it wrong, you can manually re-classify your expenses, and it will optionally apply that to future transactions. I already mentioned the burndowns, but if you have an idea for a savings schedule, MX will provide reminders and factor in your growth. The platform will also provide reminders for almost everything.
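    As a toy illustration of the idea (not MX’s actual implementation), merchant-string cleanup plus a memory for user re-classifications might look like this; the rules and merchant strings are invented for the example:

```python
import re

# Hypothetical keyword rules; real card transaction strings are far messier.
RULES = {
    "AMZN": "Shopping",
    "SHELL": "Fuel",
    "NETFLIX": "Entertainment",
}

def normalize(raw: str) -> str:
    """Strip store numbers, separators, and noise from a card string."""
    s = re.sub(r"[*#]|\d{3,}", " ", raw.upper())
    return re.sub(r"\s+", " ", s).strip()

class Classifier:
    def __init__(self):
        # User re-classifications, replayed on future matching merchants.
        self.overrides = {}

    def classify(self, raw: str) -> str:
        key = normalize(raw)
        if key in self.overrides:
            return self.overrides[key]
        for token, category in RULES.items():
            if token in key:
                return category
        return "Uncategorized"

    def reclassify(self, raw: str, category: str):
        # Remember the correction so future identical merchants pick it up.
        self.overrides[normalize(raw)] = category
```

    Because the store number is stripped during normalization, a correction made on one transaction carries over to future transactions from the same merchant at a different location.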

    Let me know if you have any other questions.




  • As others have said, a spreadsheet is the simplest. If you do your banking with a credit union, chances are they make MX available to you in your online banking. A lot of banks use MX too. Their software provides the projections and forecasting you seek, as well as Open Banking connections to all of your other accounts. If you have loans, it also has burndowns of outstanding debts. Extra bonus: MX doesn’t sell your data.

    Disclosure: I used to work for MX.




  • “By the same logic, raytracing is ancient tech that should be abandoned.”

    Nice straw man argument you have there.

    I’ll restate, since my point didn’t seem to come across. All of the “AI” garbage that is getting jammed into everything is merely scaled up from what came before. Scaling up is not advancement. A possible analogy is automobiles of the late ’60s versus the ’90s: just put in more cubic inches and a bigger chassis! More power from more displacement does not mean more advanced. Continuing that analogy, a 2.0L engine cranking out 400 ft-lb and 500 hp while averaging 28 MPG is advanced engineering. Right now, the software and hardware running LLMs are just MOAR cubic inches. We haven’t come up with more advanced data structures.

    These types of solutions can have a place and can produce something adjacent to the desired results. We make great use of expert systems constantly within narrow domains. Camera autofocus systems leap to mind. When “fuzzy logic” autofocus was introduced, it was a boon to photography. Another example of narrow-ish domain ML software is medical decision support software, which I developed in a previous job in the early 2000s. There was nothing advanced about most of it; the data structures used were developed in the 50s by a medical doctor from Columbia University (Larry Weed: https://en.wikipedia.org/wiki/Lawrence_Weed). The advanced part was the computer language he also developed for quantifying medical knowledge. Any computer with enough storage, RAM, and the hardware ability to quickly traverse the data structures can be made to appear advanced when fed with enough collated data, i.e. turning data into information.
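    The narrow-domain expert systems I’m describing can be sketched as a small rule base plus a forward-chaining loop: facts accumulate until no rule fires. This is a generic illustration with made-up rules, not the Weed system or anything I shipped:

```python
# Minimal forward-chaining rule engine. Each rule is (premises, conclusion);
# the loop re-scans rules until the fact set stops growing.
RULES = [
    ({"fever", "cough"}, "respiratory_infection"),
    ({"respiratory_infection", "chest_pain"}, "suspect_pneumonia"),
    ({"suspect_pneumonia"}, "order_chest_xray"),
]

def forward_chain(facts, rules=RULES):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts
```

    Fed with a large, well-curated rule base, even this trivially simple machinery looks “smart” within its narrow domain — which was my point about turning data into information.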

    “Since I never had the chance to try it out myself, how was your neural network and LLMs reasoning back in the day? Imo that’s the most impressive part, not that it can write.”

    It was slick for the time. It obviously wasn’t an LLM per se, but both were a form of LM. The OCR and auto-suggest for DOS were pretty shit-hot for an x386. The two together inspired one of my huge projects in engineering school: a whole-book scanner* that removed page curl and gutter shadow, and then generated a text-under-image PDF. By training the software on a large body of varied physical books and retentively combing over the OCR output and retraining, the results approached what one would see in the modern suite that now comes with your scanner. I only achieved my results because I had unfettered use of a quad-Xeon beast in the college library where I worked. That software drove the early digitization processes for this (which I also built): http://digitallib.oit.edu/digital/collection/kwl/search

    *In contrast to most book scanning at the time, which required the book to be cut apart and the pages run through a sheet-fed scanner; lots of books couldn’t be damaged like that.
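    The gutter-shadow removal can be approximated by per-column background flattening: estimate each column’s paper brightness and rescale it to white, so ink stays dark while the shadow disappears. A simplified pure-Python sketch of the general technique (not my original code), with the page as a grid of 0–255 grayscale values:

```python
def flatten_illumination(page):
    """Flatten uneven page illumination (e.g. gutter shadow).

    page: list of rows, each a list of grayscale values 0..255.
    """
    h, w = len(page), len(page[0])
    # Background estimate per column: the brightest pixel in that column,
    # since paper should be the brightest thing on a scanned page.
    background = [max(page[r][c] for r in range(h)) for c in range(w)]
    out = []
    for row in page:
        out.append([min(255, round(v * 255 / max(bg, 1)))
                    for v, bg in zip(row, background)])
    return out
```

    A real pipeline would estimate the background more robustly (a high percentile, smoothed across columns) and handle curl geometrically, but the divide-by-background idea is the core of it.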

    Edit: a word


  • No, no they’re not. These are just repackaged and scaled-up neural nets. Anyone remember those? The concept and good chunks of the math are over 200 years old. Hell, there was two-layer neural net software in the early 90s that ran on my x386. Specifically, Neural Network PC Tools by Russell Eberhart. The DIY implementation of OCR in that book is a great example of roll-your-own neural net. What we have today, much like most modern technology, is just lots MORE of the same. Back in the DOS days, there was even an ML application that would offer contextual suggestions for mistyped command line entries.
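    For flavor, a two-layer net of the sort Eberhart’s book walks through fits in a few dozen lines of plain Python. This is a from-scratch sketch (not the book’s code): a 2-4-1 sigmoid network trained with vanilla backprop on XOR, the classic problem a single layer can’t solve.

```python
import math
import random

random.seed(1)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

DATA = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]
H = 4  # hidden units
w1 = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(H)]  # 2 inputs + bias
w2 = [random.uniform(-1, 1) for _ in range(H + 1)]                  # H hidden + bias

def forward(x):
    hidden = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in w1]
    out = sigmoid(sum(w2[i] * hidden[i] for i in range(H)) + w2[H])
    return hidden, out

def train(epochs=20000, lr=0.5):
    for _ in range(epochs):
        for x, t in DATA:
            hidden, out = forward(x)
            d_out = (out - t) * out * (1 - out)          # output delta
            for i in range(H):
                # Hidden delta uses w2[i] before it is updated this step.
                d_h = d_out * w2[i] * hidden[i] * (1 - hidden[i])
                w1[i][0] -= lr * d_h * x[0]
                w1[i][1] -= lr * d_h * x[1]
                w1[i][2] -= lr * d_h
                w2[i] -= lr * d_out * hidden[i]
            w2[H] -= lr * d_out

train()
```

    After training, forward([0, 1])[1] should land near 1 and forward([0, 0])[1] near 0 (convergence depends on the seed). Everything since is, as I said, mostly MORE of this: more layers, more units, more data.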

    Typical of Silicon Valley, they are trying to rent out old garbage and use it to replace workers and creatives.




  • They were acquired by Opta Group in 2023. Since then, the quality has declined while prices increased. And around the time of their acquisition, they started doing some shady stuff when claiming USB-IF compliance. The cables were blatantly not USB-IF compliant.

    Another example: I personally love my Anker GaN Prime power bricks and 737. Unfortunately, among my friends and peers, I am the exception. The Prime chargers are known for incorrectly reading cable eMarkers and then failing to deliver the correct power. This has bitten me twice so far, but I was able to work around it.
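    For context on why a misread eMarker matters: in USB Power Delivery, deliverable power is capped by both the charger’s profile and the current the cable’s eMarker advertises — cables without a 5 A eMarker are treated as 3 A. A grossly simplified sketch of that budget math (real PD negotiation is far more involved):

```python
def negotiated_power_w(charger_max_w: float, voltage_v: float,
                       emarker_current_a: float) -> float:
    """Power actually deliverable: min of charger capability and cable limit."""
    cable_limit_w = voltage_v * emarker_current_a
    return min(charger_max_w, cable_limit_w)
```

    So a 100 W charger that misreads a 5 A cable as 3 A will cap the session at 60 W at 20 V, which matches the underpowered behavior people report.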