To help share the vision for Asteria, we created a video where Dan Gailey talks about the product and long-term goals. Presenting the vision in a format like this was a great way to simplify and describe what Asteria is. #startup #artificialIntelligence #ml #ai #blockchain
Quick update: I’m reconciling some architecture changes: moving the companion/router software to be instantiated by the boot loader, and routing all message brokerage through “Watcher” (which just brokers messages, and global device contexts and modes, to all agents on the system).
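As a rough illustration of the “Watcher” role described above, here's a minimal Python sketch of an agent broker. This is not the actual device code; the class and method names (`Watcher`, `register`, `set_mode`, `send`) and the agent names are hypothetical, assumed only for the example.

```python
import queue

class Watcher:
    """Hypothetical sketch of the 'Watcher' broker: it holds global device
    context/mode and relays all messages between agents on the system."""

    def __init__(self):
        self.agents = {}                  # agent name -> inbox queue
        self.context = {"mode": "idle"}   # global device context

    def register(self, name):
        # Each agent gets its own inbox; Watcher is the only router.
        inbox = queue.Queue()
        self.agents[name] = inbox
        return inbox

    def set_mode(self, mode):
        # A device-mode change is broadcast to every registered agent.
        self.context["mode"] = mode
        for inbox in self.agents.values():
            inbox.put(("mode", mode))

    def send(self, sender, recipient, payload):
        # Point-to-point messages are still brokered through Watcher,
        # so agents never talk to each other directly.
        self.agents[recipient].put(("msg", sender, payload))

# Illustrative usage with made-up agent names:
watcher = Watcher()
collector_inbox = watcher.register("collector")
watcher.set_mode("listening")
watcher.send("conversation", "collector", {"event": "wake"})
print(collector_inbox.get())  # ('mode', 'listening')
print(collector_inbox.get())  # ('msg', 'conversation', {'event': 'wake'})
```

The design point is that agents only ever see their inbox, which keeps every agent decoupled from every other agent.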
After this, I’ll be abstracting the template app from the “Collector” app we’ve created so developers can get up and running on the device emulator software quickly.
Should be done by Sunday. Then a quick test to make sure registration, synchronization, and notifications work between device <-> platform, and we should be releasing our first, extremely early alpha version.
I’ll then work on docs and laying out the goals of the architecture and software for everyone, so people can participate how they’d like.
We’ve also just expanded the team with a Mechanical Engineer, and we’re putting up a job spec for a Conversational Theorist (see Pask).
Asteria is your artificially intelligent companion that you carry with you. It sees what you see, hears what you hear, takes in life as you do, and gets smarter all along the way. Asteria connects the dots between your physical and digital life.
As we continue to work on our product, we will keep sharing updates and deep dives into our product roadmap here.
Working on the concurrency / parallel programming architecture for the device software. The main loop runs concurrent processes assigned to each control system / interface (e.g. checking for messages/emails/data returned from bots, conversation). The main loop checks for incoming events and brokers messages for the other processes.
Each concurrent process / coroutine can be controlled via messages from the main control loop.
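The loop-plus-coroutines pattern above can be sketched with Python's asyncio. This is a minimal, hypothetical example, not the actual device software: the subsystem names and the "stop" control message are assumptions made just for illustration.

```python
import asyncio

async def worker(name, inbox, log):
    # Each control system/interface (email, conversation, bots, ...) runs
    # as a coroutine that reacts to messages sent by the main loop.
    while True:
        msg = await inbox.get()
        if msg == "stop":          # control message from the main loop
            log.append((name, "stopped"))
            return
        log.append((name, msg))

async def main_loop(events):
    # The main loop owns one inbox per subsystem and brokers every
    # incoming event to the right coroutine.
    log = []
    inboxes = {n: asyncio.Queue() for n in ("email", "conversation")}
    tasks = [asyncio.create_task(worker(n, q, log)) for n, q in inboxes.items()]
    for target, payload in events:
        await inboxes[target].put(payload)
    for q in inboxes.values():
        await q.put("stop")        # main loop controls each coroutine via messages
    await asyncio.gather(*tasks)
    return log

log = asyncio.run(main_loop([("email", "new message"), ("conversation", "wake word")]))
print(log)
```

The key property is that workers never poll shared state; all control flows through their inboxes, which mirrors the brokered-message design.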
Developing the device software architecture and reading this book: https://www.amazon.com/Distributed-Computing-Python-Francesco-Pierfederici/dp/1785889699
Standard model of cognitive architecture link here: https://www.facebook.com/photo.php?fbid=1056720574394198&set=gm.1737315873173912&type=3&theater
So here is logo #2 with a few colors and sizes to show its use. My next step will be picking colors for the branding, and placing this on a comp of the main landing page to show how it looks. I also replaced the "s". Once I have colors/etc. picked, I will go through this logo with a fine-tooth comb to fix letter spacing and any pixel issues. #logo #design #branding #fonts
Here are my latest designs. I had to freehand the logo icon to create a vector version, since I could not find a spiral plugin for Illustrator that isn't 10 years old, haha. I'll figure out better options to create this in the future, but this should work. Next, I did a weight and font exploration. I have my favorite, but let me know what you think of the 4 I chose as finalists, Andy Shimmin and Dan Gailey. The fonts look rasterized if you zoom in, so try to look past that. The next step after this is to clean up the chosen option, throw in some colors, and finalize. #logo #branding #design
The original logo sketch. As noted in a private conversation, the end nodes should be wide, with the internal area balanced against the visual strength of the nodes. I thought 7 nodes would be good, but the piece of software I suggest we use, http://nathanfriend.io/inspirograph/, only has 6-pointed symbols at 144/96 + 80 at the 13th dot from the outside.
Some early notes I jotted down while thinking about how the communicator might function. They tie back into the ideas of ambient intelligence, quantified living, the future of labor, and autonomous agents and systems:
Enables every human to become part of the autonomous agent network, utilizing AI to guide human labor. “Pick this up”, “Get on this bus”, “Drop this off” takes command-driven human capital to another level. Let the computer run your life.
Start with “In the future”
Helping people find their place in the universe.
Efficiently place/fit humans in an increasingly autonomous world, and use video/camera data to train robotics to one day take over those jobs. Use deep learning to encode the user’s voice to the device. Humans have always been used, relative to technology, to bridge the inefficiency gap in work output until technology advances enough to cover that gap for us. This cycle repeats until technology surpasses man’s ability to discover, adapt to, and exploit those inefficiencies between resource/input and some marketable output. We now work in harmony within our autonomous system.
Start building consumer / life profiles for users for artificially intelligent assistants.
Some additional musings on the project. Here you can see my thinking about the product experience, single- and multi-touch buttons, audio namespaces, audio bots and their relevance to traditional bot/agent implementations, and device modes. There are also some thoughts on the presentation outline. This was literally on the back of an envelope.
Project 3D printed from stardust!