University of Minnesota-Twin Cities
complex, but not complicated
Specialties: Scala, Java
Design for scalability, reliability, availability
Lean Startup and Agile development methodologies
Continuous deployment practices
Performance engineering and monitoring
Headache Removal Systems
Cloud systems using Scala (Akka, HTTP+REST+JSON) and datastores like Postgres, Cassandra and Elasticsearch.
Member of Engineering Services group responsible for building highly leveraged tools, frameworks and services for all engineers in the company.
Authored, extended and maintained Box's backend service framework, used to build over 20 services since inception. Written in Scala, it provides lifecycle management, configuration facilities, service discovery, application metrics, REST API support, etc., similar to Coda Hale's "Dropwizard" and Twitter's "Finagle" frameworks.
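As an illustration of the lifecycle-management piece such frameworks provide, here is a minimal sketch in Java; the names (`Managed`, `Component`, `startAll`) are hypothetical, not Box's actual API. The framework starts registered components in order and stops them in reverse, so dependents shut down before their dependencies.

```java
import java.util.ArrayList;
import java.util.List;

public class ServiceLifecycle {
    // Anything with a start/stop lifecycle can be managed by the framework.
    public interface Managed {
        void start();
        void stop();
    }

    // Simple component that records its transitions, for demonstration.
    public static class Component implements Managed {
        private final String name;
        private final List<String> events;
        public Component(String name, List<String> events) {
            this.name = name;
            this.events = events;
        }
        public void start() { events.add(name + ":start"); }
        public void stop()  { events.add(name + ":stop"); }
    }

    private final List<Managed> components = new ArrayList<>();

    public void register(Managed m) { components.add(m); }

    // Start components in registration order...
    public void startAll() {
        for (Managed m : components) m.start();
    }

    // ...and stop them in reverse order, so dependents shut down first.
    public void stopAll() {
        for (int i = components.size() - 1; i >= 0; i--) {
            components.get(i).stop();
        }
    }
}
```

Dropwizard's `Managed` objects follow the same shape; registration order doubles as a lightweight dependency declaration.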
Authored, extended and maintained the "Deployment Manager" service (Scala), used to deploy all other services, including itself. Originally built to deploy only JVM-based services, it was extended to deploy "fat jar"-, tarball- and rpm-based artifacts that use multiple service announcement mechanisms. Operator control via REST API, web UI and XMPP bot. Offers flexible deployment plan definition and inspection, fast rollback support, "canary" sequencing, pluggable intervention mechanisms (abort/retry/fail) when problems occur, Maven artifact repository integration, etc. Co-author of "Configuration Manager" service (Scala) offering a publish/subscribe model for service configuration. Maintainer of Maven artifact repository and Scala continuous integration via Jenkins.
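A hedged sketch of the canary-sequencing idea, with all names hypothetical rather than the Deployment Manager's real interface: deploy the new version to a single canary host first, check its health, and either roll forward to the remaining hosts or roll the canary back.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.function.Predicate;

public class CanaryDeploy {
    // Deploy `version` to `hosts`, mutating the `running` host->version map.
    // Returns the ordered list of actions taken (illustrative only).
    public static List<String> deploy(List<String> hosts, String version,
                                      Map<String, String> running,
                                      Predicate<String> healthy) {
        List<String> plan = new ArrayList<>();
        String canary = hosts.get(0);            // assume at least one host
        String previous = running.get(canary);   // remembered for rollback
        running.put(canary, version);
        plan.add("deploy " + canary);
        if (!healthy.test(canary)) {
            // Canary failed its health check: fast rollback, stop the rollout.
            running.put(canary, previous);
            plan.add("rollback " + canary);
            return plan;
        }
        // Canary passed: roll the new version to the remaining hosts.
        for (String h : hosts.subList(1, hosts.size())) {
            running.put(h, version);
            plan.add("deploy " + h);
        }
        return plan;
    }
}
```

The real service's pluggable abort/retry/fail interventions would replace the single rollback branch here with operator-selectable policies.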
Advocate, mentor, and teacher for effective use of Scala: constant IM presence, code reviews and workshops, readability guidelines, internal tech talks, etc.
Metamarkets: Extended an in-memory data analysis system to use memory-mapped data sources. (Java)
Box: Implemented base libraries for backend services and the start of a continuous deployment system. (Scala)
Disrupting the trillion-dollar ($10^12) mutual fund market. My main work centered on:
Infrastructure: Integrated the Apache Zookeeper coordination service for service discovery, distributed locking, leader election, etc.; integrated RabbitMQ message queue for service decoupling and collection/consolidation of service metadata; enhanced proprietary "Query Engine" RPC system at all layers; data center maintenance for machine consolidation, service replication and relocation; database updates and migrations; wrote lots of good code and removed lots of bad code; ....
Financial Data Analytics: For one year I was responsible for all daily performance, attribution analysis, risk metrics and money manager scoring calculations; third-party data integration; monitoring and reporting systems; and improvements to analytics driven by the CEO, sales team and CTO.
Customer Analytics: Built analysis tools and reporting for understanding our customers' behavior using session classification, a generalized "funnel" analysis method, behavior correlations, custom clustering, business metrics and trending.
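The generalized funnel analysis can be sketched as follows; the event names and the `funnelCounts` helper are made up for illustration, not Wealthfront's actual code. For each session we count a funnel step only if it occurs after the previously matched step, so the counts show where users drop out of an ordered flow.

```java
import java.util.List;

public class FunnelAnalysis {
    // counts[s] = number of sessions that reached step s of the funnel,
    // where steps must occur in order within a session's event stream.
    public static int[] funnelCounts(List<List<String>> sessions, List<String> steps) {
        int[] counts = new int[steps.size()];
        for (List<String> session : sessions) {
            int pos = 0;  // next event index to scan from in this session
            for (int s = 0; s < steps.size(); s++) {
                int found = -1;
                for (int i = pos; i < session.size(); i++) {
                    if (session.get(i).equals(steps.get(s))) { found = i; break; }
                }
                if (found < 0) break;  // session drops out of the funnel here
                counts[s]++;
                pos = found + 1;       // later steps must come after this one
            }
        }
        return counts;
    }
}
```

Dividing adjacent counts gives per-step conversion rates, which is typically what gets reported and trended.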
In addition, I have been an active promoter of Wealthfront's engineering practices, such as Continuous Deployment, through speaking engagements and blog posts; mentored other engineers; and worked to improve our product through both rhetorical and rational argument.
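The ZooKeeper integration mentioned above relies on recipes such as leader election. Here is a toy in-memory model of that recipe, purely illustrative; real code would use the ZooKeeper client (or Apache Curator) against a live ensemble. Each contender creates an ephemeral sequential node, and whoever holds the lowest sequence number leads; when that session ends, its node disappears and the next-lowest contender is promoted.

```java
import java.util.Optional;
import java.util.TreeMap;

public class LeaderElection {
    // Models ZooKeeper's ephemeral *sequential* nodes: sequence -> contender.
    private final TreeMap<Long, String> nodes = new TreeMap<>();
    private long nextSeq = 0;

    // "Create an ephemeral sequential node" for a contender; returns its sequence.
    public long join(String contender) {
        nodes.put(nextSeq, contender);
        return nextSeq++;
    }

    // Session loss deletes the ephemeral node, possibly promoting a new leader.
    public void leave(long seq) {
        nodes.remove(seq);
    }

    // The leader is the contender holding the smallest sequence number.
    public Optional<String> leader() {
        return nodes.isEmpty() ? Optional.empty()
                               : Optional.of(nodes.firstEntry().getValue());
    }
}
```

In the real recipe each contender watches only its predecessor's node, which avoids a "herd effect" of every client waking on every change.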
Member of Platform group responsible for Ning's 30+ back-end services.
Created an Amazon-Dynamo-style distributed hash table for a new activity stream service, which I also built. Created, integrated and deployed instrumentation infrastructure. Enhanced the custom web-chat service. Introduced custom caches for various backend services.
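At the heart of a Dynamo-style DHT is a consistent-hashing ring; the sketch below is illustrative only (a production system would add virtual nodes, replication and hinted handoff, and a better hash than this one). Each key is owned by the first node clockwise from the key's position, so adding or removing a node only remaps the keys adjacent to it.

```java
import java.util.Map;
import java.util.TreeMap;

public class HashRing {
    // Node positions on the ring, keyed by hash.
    private final TreeMap<Integer, String> ring = new TreeMap<>();

    // Any stable hash works for the sketch; spread the JDK hash a little.
    private static int hash(String s) {
        int h = s.hashCode();
        h ^= (h >>> 16);
        return h & 0x7fffffff;  // keep positions non-negative
    }

    public void addNode(String node) { ring.put(hash(node), node); }

    public void removeNode(String node) { ring.remove(hash(node)); }

    // A key is owned by the first node clockwise from the key's position.
    public String nodeFor(String key) {
        Map.Entry<Integer, String> e = ring.ceilingEntry(hash(key));
        if (e == null) e = ring.firstEntry();  // wrap around the ring
        return e.getValue();
    }
}
```

The property that makes this "consistent": removing a node moves only the keys that node owned, which keeps rebalancing traffic proportional to 1/N rather than reshuffling everything.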
Member of Platform team responsible for proprietary synchronization protocol and server components. Former lead of Web team in 2005.
Created next-generation storage system for data center: an HTTP storage service with flexible backings for extreme scale-out. Converted the static thumbnail generation daemon to a stateless HTTP service for improved scale-out and flexible image conversion. Wrote the Sharpcast Native SDK (Win32, Linux, Mac OS X) with complete documentation, and led integration with a third-party photo management software vendor. Enhanced the sync platform: secure authentication, configuration validation, protocol and agent versioning, outgoing HTTP proxy tunnelling, SSL support, and a Berkeley DB indexing abstraction for query performance. Created a flexible metrics framework and monitoring system used in all data center processes. Created a platform/Java bridge for the web server, and a web platform using standard Java frameworks: data objects, managers, controllers, views and build system.
Member of Applications and Architecture Group in the Computing Science Laboratory.
Architected and built an advanced, usable environment (IDE) for the development and deployment of information extraction models. Developed web-based collaborative software for a large-scale ethnographic fieldwork project. Enhanced, tested and deployed a novel traffic clustering and analysis system to compute aggregate web site user behaviors. Architected and built an automated web site usability measurement and traffic simulation system from a research prototype.
Core member of Search Team tasked with creating a scalable, reliable fulltext search system incorporating Outride's proprietary relevance technology. Outride was a spin-out of Xerox PARC and was acquired by Google in 2001.
Architected and built the Outride Query Server, indexing system and Outride Relevance Engine from an R&D prototype. Instrumented the search subsystem and developed a statistics gathering/graphing package. Performed detailed performance engineering on the distributed system.
Responsible for the technical leadership of many projects.
Re-architected and refactored the data replication system. Redesigned the data extraction and translation subsystem to increase performance and reliability. Developed and managed technical support for the client services division, interfacing with engineering.
According to usability experts, the top user issue for Web sites is difficult navigation. We have been developing automated usability tools for several years, and here we describe a prototype service called InfoScent™ Bloodhound Simulator, a push-button navigation analysis system, which automatically analyzes the information cues on a Web site to produce a usability report. We further build upon previous algorithms to create a method called Information Scent Absorption Rate, which measures the navigability of a site by computing the probability of users reaching the desired destinations on the site. Lastly, we present a user study involving 244 subjects over 1385 user sessions that shows how Bloodhound correlates with real users surfing for information on four Web sites. The hope is that, by using a simulation of user surfing behavior, we can reduce the need for human labor during usability testing, thus dramatically lowering testing costs and ultimately improving user experience. The Bloodhound Project is unique in that we apply a concrete HCI theory directly to a real-world problem. The lack of empirically validated HCI theoretical models has plagued the development of our field, and this work is a step in that direction.
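The absorption-rate idea can be sketched as a small Markov-chain computation; the page graph and numbers below are made up for illustration, and this is not the paper's actual algorithm, only its core mechanism. Surfing is modeled as a chain over pages whose link-following probabilities come from information scent; the destination page is made absorbing, and navigability is the probability mass it soaks up within a fixed number of clicks. (Dead-end pages simply drop mass in this sketch.)

```java
public class AbsorptionRate {
    // transitions[i][j] = probability of moving from page i to page j
    // (derived from information-scent scores in the real model).
    // Returns the probability of having reached `target` within `steps` clicks,
    // treating `target` as an absorbing state.
    public static double absorbedMass(double[][] transitions, int start,
                                      int target, int steps) {
        int n = transitions.length;
        double[] dist = new double[n];
        dist[start] = 1.0;  // all users begin on the start page
        for (int s = 0; s < steps; s++) {
            double[] next = new double[n];
            for (int i = 0; i < n; i++) {
                if (i == target) {
                    next[i] += dist[i];  // absorbing: mass stays put
                    continue;
                }
                for (int j = 0; j < n; j++) {
                    next[j] += dist[i] * transitions[i][j];
                }
            }
            dist = next;
        }
        return dist[target];
    }
}
```

A site where scent funnels most simulated surfers to their destinations in a few clicks scores near 1.0; a confusing site bleeds mass into dead ends and scores low.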
Web Usage Mining enables new understanding of user goals on the Web. This understanding has broad applications, and traditional mining techniques such as association rules have been used in business applications. We have developed an automated method to directly infer the major groupings of user traffic on a Web site [Heer01]. We do this by utilizing multiple data features in a clustering analysis. We have performed an extensive, systematic evaluation of the proposed approach, and have discovered that certain clustering schemes can achieve categorization accuracies as high as 99% [Heer02b]. In this paper, we describe the further development of this work into a prototype service called LumberJack, a push-button analysis system that is both more automated and more accurate than past systems.