Axon
Hello Axon!
I have created the following content to demonstrate my strategic, analytical, and visionary capacities in pursuit of the Principal AI Product Designer position.
Ideation:
Compared to typical ideation exercises, I’ve worked in reverse here — rather than starting with a blue-sky vision informed by user research, I began by taking inventory of current and forecasted technological capabilities. While I was compelled to take this approach by a lack of access to users (officers) and research insights, I’ve also found that it is, in general, a very useful jumping-off point for the blue-sky vision of new technologies.
So let’s get started!
1) Bodycam Data Collection + AI Inference
Auto-labeling of subject ID:
Allows the bodycam to use facial recognition to passively query criminal photo databases, which, when coupled with high-priority alerts (warrants, etc.), can surface information to the officer at the time of need, improving situational awareness in real time.
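To make this concrete, here is a minimal Python sketch of the watchlist check (the threshold, subject IDs, and random embeddings are hypothetical stand-ins for a production face-recognition model):

import numpy as np

EMBED_DIM = 512
MATCH_THRESHOLD = 0.92  # hypothetical; tuned to keep false positives rare

# subject_id -> (reference embedding, high-priority flag such as an active warrant)
watchlist = {
    "subject-104": (np.random.rand(EMBED_DIM), True),
    "subject-377": (np.random.rand(EMBED_DIM), False),
}

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def check_face(face_embedding: np.ndarray) -> None:
    """Compare one detected face against the watchlist; alert on priority hits."""
    for subject_id, (reference, high_priority) in watchlist.items():
        if cosine(face_embedding, reference) >= MATCH_THRESHOLD and high_priority:
            print(f"ALERT: possible match {subject_id}, notifying officer")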
Auto-labeling of scene objects to aid evidence retrieval:
Allows for queries such as "Find video assets related to the recent hit-and-run with a red pickup."
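A minimal sketch of how such a query could resolve against auto-generated labels (the asset records are invented; a production system would use embedding search rather than exact label overlap):

video_assets = [
    {"id": "vid-001", "labels": {"red", "pickup", "intersection"}, "case": "hit-and-run"},
    {"id": "vid-002", "labels": {"blue", "sedan"}, "case": "traffic stop"},
]

def find_assets(query_labels: set, case_type: str) -> list:
    """Return asset IDs whose auto-generated labels cover all query terms."""
    return [a["id"] for a in video_assets
            if a["case"] == case_type and query_labels <= a["labels"]]

print(find_assets({"red", "pickup"}, "hit-and-run"))  # -> ['vid-001']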
Audio capture of voices matched to subject ID — ground truth established by video of the individual while speaking:
Enriches transcription and positively identifies out-of-frame speakers.
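A minimal sketch of the binding step (segment times, voiceprints, and subject IDs are hypothetical; diarization and on-camera speech detection would come from upstream models):

# (start_sec, end_sec, voiceprint_id) from speaker diarization
speech_segments = [(0.0, 4.2, "voiceprint-A"), (5.0, 9.8, "voiceprint-B")]
# (start_sec, end_sec, subject_id) from face tracking; None = out of frame
face_tracks = [(0.0, 4.5, "subject-104"), (5.1, 10.0, None)]

def bind_voices_to_subjects() -> dict:
    """Link a voiceprint to a subject ID when speech and an on-camera face overlap."""
    bindings = {}
    for s_start, s_end, voice in speech_segments:
        for f_start, f_end, subject in face_tracks:
            overlap = min(s_end, f_end) - max(s_start, f_start)
            if subject and overlap > 0.5 * (s_end - s_start):
                bindings[voice] = subject  # later out-of-frame speech is now attributable
    return bindings

print(bind_voices_to_subjects())  # {'voiceprint-A': 'subject-104'}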
ML models for optical classification of captured media:
Shoe print registry, tire mark registry, tool mark registry, automotive paint color registry, etc.
Again, these can run continuously as a background process, ingesting new registry entries while looking for matches to unresolved cases and, upon a match, sending a notification to all law enforcement personnel associated with the case for human review.
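A minimal sketch of that background loop (match_score() stands in for a trained shoe print / tire mark / tool mark classifier; the records and notification transport are invented):

unresolved_cases = [
    {"case_id": "C-2291", "media_signature": "tread-7F", "personnel": ["ofc-88"]},
]

def match_score(entry_signature: str, media_signature: str) -> float:
    return 1.0 if entry_signature == media_signature else 0.0  # placeholder for a real model

def notify(personnel: list, message: str) -> None:
    print(f"notify {personnel}: {message}")

def ingest(new_registry_entries: list) -> None:
    """On each new registry entry, scan unresolved cases and flag hits for human review."""
    for entry in new_registry_entries:
        for case in unresolved_cases:
            if match_score(entry["signature"], case["media_signature"]) > 0.9:
                notify(case["personnel"],
                       f"registry hit {entry['id']} on {case['case_id']}, human review required")

ingest([{"id": "tire-5512", "signature": "tread-7F"}])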
ML models for matching suspect gait:
Gait can be a reliable identifier — body, vehicle, or CCTV camera feeds are automatically and continuously compared against the gaits of persons of interest.
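A minimal sketch of ranking persons of interest by gait similarity (the embeddings are random placeholders for a gait-recognition model's output):

import numpy as np

poi_gallery = {"poi-12": np.random.rand(128), "poi-40": np.random.rand(128)}

def rank_gait_matches(track_embedding: np.ndarray, top_k: int = 3) -> list:
    """Return the closest persons of interest for one pedestrian track."""
    distances = [(poi_id, float(np.linalg.norm(track_embedding - emb)))
                 for poi_id, emb in poi_gallery.items()]
    return sorted(distances, key=lambda pair: pair[1])[:top_k]

print(rank_gait_matches(np.random.rand(128), top_k=2))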
OCR trained against individual officer handwriting (for those LEAs that prefer handwriting):
The officer handwrites “The quick brown fox jumps over the lazy dog,” which is then captured by the camera as a labeled calibration sample.
Allows field notes to be reliably transcribed to standardized digital formats and LEA forms.
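A minimal sketch of how the pangram could seed officer-specific glyph templates (placeholder arrays stand in for cropped glyph images; a production system would fine-tune an OCR model rather than use nearest-template matching):

import numpy as np

PANGRAM = "the quick brown fox jumps over the lazy dog"

def calibrate(glyph_images: list) -> dict:
    """Map each character of the captured pangram to that officer's glyph sample."""
    characters = [c for c in PANGRAM if c != " "]
    assert len(characters) == len(glyph_images)
    templates = {}
    for char, image in zip(characters, glyph_images):
        templates.setdefault(char, image)  # keep the first sample per character
    return templates

def classify(glyph: np.ndarray, templates: dict) -> str:
    """Nearest-template match (a stand-in for a fine-tuned recognizer)."""
    return min(templates, key=lambda c: float(np.abs(glyph - templates[c]).sum()))

templates = calibrate([np.zeros((8, 8))] * 35)  # 35 letters in the pangram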
2) Asset Standardization + AI Interoperability:
Text / Video / Audio encoders:
Procedural encoding operations homogenize all rich-media evidence for interoperability between LEA RMS implementations.
Transform documentation from all police precincts into a universal format for both LEA & AI models:
Background or in-situ scanning of documents / screens / evidence bags / video → CV subject and object ID predictions → text / labels / metadata extracted → SLM/LLM preprompt instructions to identify keywords / fields of interest (e.g. “VIN”, “Charge”, “Age”) to then determine the appropriate universal form type (evidence chain of custody, traffic stop, etc.)
Personnel-in-the-loop approval ✓
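A minimal sketch of the routing step at the end of that pipeline (keyword lists, form names, and extract_text() are hypothetical; extraction would be CV/OCR, and field identification an SLM/LLM prompt):

FORM_KEYWORDS = {
    "traffic_stop": {"vin", "plate", "violation"},
    "chain_of_custody": {"evidence", "custody", "seal"},
}

def extract_text(scanned_media: bytes) -> str:
    return "VIN 1HGCM82633A004352, violation: speeding"  # OCR placeholder

def route_to_form(scanned_media: bytes) -> tuple:
    """Pick the universal form type; approval always stays with personnel."""
    text = extract_text(scanned_media).lower()
    scores = {form: sum(kw in text for kw in kws) for form, kws in FORM_KEYWORDS.items()}
    return max(scores, key=scores.get), True  # (form type, needs human approval)

print(route_to_form(b""))  # ('traffic_stop', True)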
If other agencies use alternative records management systems and are not able or willing to adopt Axon, adapt an existing transformer-based LLM to passively ingest their evidence and make it available to the Axon ecosystem.
More here to consider and elucidate, but in the interest of time…
3) Asset Discoverability & Insights:
Database matching officer number and name to accelerate communication and asset retrieval
Allows for queries such as “connect me with Detective Nguyen at the North Seattle precinct”.
Natural language processing in concert with an LLM to decrease retrieval time
"Pull up my traffic stops from last Thursday on Aurora Ave"
“Connect me with the detective assigned to the hit-and-run case I filed in Q4 of 2022.”
“4 matches found, do you recall any other details of the case?”
“I believe there was a blue sedan involved.”
“Match found. Detective Marcus Johnson — connecting now.”
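A minimal sketch of that disambiguation loop: filter the candidate set with each new detail until one case remains (case records and attribute names are invented; an LLM would parse the details from speech):

cases = [
    {"id": 1, "quarter": "2022-Q4", "vehicle": "blue sedan", "detective": "Marcus Johnson"},
    {"id": 2, "quarter": "2022-Q4", "vehicle": "red pickup", "detective": "A. Ortiz"},
]

def narrow(candidates: list, key: str, value: str) -> list:
    return [c for c in candidates if c[key] == value]

matches = narrow(cases, "quarter", "2022-Q4")
if len(matches) > 1:
    # system: "N matches found, do you recall any other details of the case?"
    matches = narrow(matches, "vehicle", "blue sedan")  # user: "a blue sedan"
if len(matches) == 1:
    print(f"Match found. Detective {matches[0]['detective']} — connecting now.")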
Recommendation algorithm to find similar or directly related assets, e.g. case files of subject ID
Upon reaching an acceptable confidence threshold for subject ID, a notification is sent to LEA personnel to review the insights.
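A minimal sketch of the notification gate (the scores and review threshold are placeholders):

REVIEW_THRESHOLD = 0.85  # hypothetical

# (related case, subject-ID confidence) produced by the recommender
candidates = [("case-118", 0.91), ("case-302", 0.42)]

for case_id, confidence in candidates:
    if confidence >= REVIEW_THRESHOLD:
        print(f"review request: {case_id} likely involves the same subject ({confidence:.0%})")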
LLM parsers independently query the Axon Evidence database to provide recommended COAs (courses of action):
Essentially, LLMs + ML models = continuous background processing of evidence to surface missed connections between cases, persons of interest, police and prosecution personnel, and evidence, 24/7 without human input (obviously, compute is a concern).
LLM multi-disciplinary bot army:
IBM recently unveiled a multi-disciplinary, multi-agent approach to resolving complex problems (e.g., an LLM is instructed to evaluate a problem as a physicist agent, the same is then asked of a mathematician agent, and finally a higher-level agent is instructed to solve the original problem using the insights from both).
This may come in the form of:
Officer agent
CSI agent
Records conformance agent
Rights conformance agent
Paralegal agent
County Clerk agent
Criminal defense attorney agent
Prosecutorial agent
Judge agent
Upon ingestion of each agent’s output, an executive summary and recommended courses of action are produced with significantly improved accuracy and logic compared to a single universal agent (a minimal sketch follows the list below).
Reveal previously unforeseen issues with case files — evidence chain of custody, clerical errors, etc.
Reveal historical case precedent insights
Case strategy recommendations
Attorney / judge profile generation to anticipate rulings based on previous similar cases (feeds into case strategy)
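A minimal sketch of that orchestration, under the assumption that llm() wraps any chat-completion API (the roles mirror the agent list above):

ROLES = ["officer", "CSI", "records conformance", "rights conformance", "paralegal",
         "county clerk", "criminal defense attorney", "prosecutor", "judge"]

def llm(prompt: str) -> str:
    return f"[model output for: {prompt[:60]}...]"  # swap in a real API call

def review_case(case_file: str) -> str:
    """Collect each role's critique, then synthesize an executive summary."""
    critiques = [llm(f"As a {role}, review this case file for issues:\n{case_file}")
                 for role in ROLES]
    return llm("Synthesize an executive summary and recommended courses of action "
               "from these perspectives:\n" + "\n".join(critiques))

print(review_case("case C-2291: hit-and-run, evidence items 1-14"))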
A lot more here to consider and elucidate, but in the interest of time…
4) Data Sharing:
Inter-agency "Follow-Up Service" to recursively identify assets that should be shared with other agencies or officers who have historical relevance to the subject or case.
E.g. Another agency or officer was part of a previous case involving the individual I stopped in traffic today — this information is sent as a notification to the relevant parties.
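A minimal sketch of that follow-up check (the case index and notification transport are invented for illustration):

case_index = {
    "subject-104": [{"case": "C-0881", "agency": "Bellevue PD", "officer": "ofc-17"}],
}

def on_new_event(subject_id: str, event: str) -> None:
    """Notify every party whose prior case touches the subject of a new event."""
    for prior in case_index.get(subject_id, []):
        print(f"notify {prior['agency']}/{prior['officer']}: "
              f"{subject_id} ({event}) appeared in your case {prior['case']}")

on_new_event("subject-104", "traffic stop, Aurora Ave")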
AR/VR playback of crime scene investigation:
CSI sets up a few off-the-shelf RGB + depth cameras at the perimeter of the scene; the scene-origin offsets of each camera’s position and orientation allow “stitching” of the RGB + depth data (think point cloud), and playback on an AR/VR headset recreates a traversable spatial record of the entire scene investigation.
Roughly 5 years ago I had direct experience working with software that does this “3D stitching” in real time with two Microsoft Kinects — the technology has only advanced since then.
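A minimal sketch of the stitching math: each camera's points are moved into the shared scene frame via its known pose, then concatenated (the poses and point data are placeholders for calibrated RGB-D frames):

import numpy as np

def to_scene_frame(points_cam: np.ndarray, rotation: np.ndarray, origin_offset: np.ndarray) -> np.ndarray:
    """Rigid transform: scene_point = R @ cam_point + t."""
    return points_cam @ rotation.T + origin_offset

cam_a_points = np.random.rand(1000, 3)
cam_b_points = np.random.rand(1000, 3)
scene_cloud = np.vstack([
    to_scene_frame(cam_a_points, np.eye(3), np.array([0.0, 0.0, 0.0])),
    to_scene_frame(cam_b_points, np.eye(3), np.array([4.0, 0.0, 0.0])),  # 4 m offset for camera B
])
print(scene_cloud.shape)  # (2000, 3): one traversable cloud for AR/VR playback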
A lot more here as well, but just scratching the surface today!
Final ideation thoughts:
Of course, this is just the beginning of formulating a cogent strategy and vision. We have identified possibilities which we can now leverage to compose a holistic, realistic vision of the future, with confidence it can be executed. That said, I’d again love to approach this from the blue-sky angle, informed by user research.
UI mockup
Considering I do not have SaaS UI work on my site, I decided to make a quick mock-up of a home screen for a completely imaginary portal. The goal of this exercise was simply to demonstrate basic UI / visual design and some IA. Designing a great UX would of course require intensive user research, field touchpoints, design workshops and exercises, discussions, etc.
Please let me know if you’d like me to produce any additional mock-ups in relation to any feature(s) from the Strategy / Ideation exercise. I’m happy to prove I have what it takes for this role.
Thank you for your consideration, Axon! I hope I have piqued your interest in my candidacy. I entered the IVAS program at Microsoft to improve training outcomes in pursuit of saving lives, and it would be my honor to help Axon do the same.
-Jeremy