
selop/pokebox

This is a deep technical analysis of pokebox.

This project displays Pokemon TCG cards utilizing ThreeJS and Off-Axis-Projection

by selop
Report Contents
Product — positioning, core features & user journeys
Assessment — architecture, tech stack & implementation
Assets — APIs, data models & key modules
AI-Generated • Verify Details
selop/pokebox
@2a11d7d

How selop/pokebox Works


Overview

Pokebox is a technologically ambitious fan project that elevates the concept of digital card collecting. Unlike static image galleries or simpler CSS-based effects, it uses a sophisticated WebGL rendering pipeline (GLSL shaders) combined with real-time user motion tracking (via MediaPipe for head tracking or a gyroscope for device tilt). Its core competitive advantage is the creation of a 'parallax window' illusion, making the screen feel like an opening into a physical box of cards that react realistically to the user's movement, a feature not commonly found in web-based fan projects.

The project's goal is to recreate the tactile and visual experience of holding and tilting a physical holographic Pokémon card in a standard web browser, using hardware like a webcam or gyroscope to create a realistic, motion-reactive 3D effect.

How It Works: End-to-End Flows

Experiencing Parallax with Head Tracking (Desktop)

This is the primary user journey on a desktop, showcasing the project's most innovative feature. A new user lands on the site and is guided through enabling their webcam. Once active, their head movements are translated in real-time to the 3D scene's camera, creating a magical 'window' effect where the holographic cards and box appear to have real depth and shift realistically with the user's perspective. It's a hands-free, intuitive flow that delivers the core value proposition of a tangible, physical-like viewing experience.

  1. User grants camera permission via the onboarding modal
  2. System activates head tracking and begins processing video frames
  3. User moves their head, and the system updates the virtual camera's position
  4. The holographic shaders on the card react to the new camera angle, shifting highlights and colors
  5. User experiences the parallax and holographic effects in real-time
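The mapping in steps 3 and 4 can be sketched as follows. This is a minimal illustration, not pokebox's actual code: the function name, coordinate conventions, and sensitivity model are all assumptions.

```typescript
// Hypothetical sketch: convert a face bounding-box centre (normalised
// 0..1 video coordinates, as returned by a face detector) into a 3D
// 'target eye position' that drives the virtual camera.
interface EyePosition { x: number; y: number; z: number; }

function faceToTargetEye(
  cx: number,        // bounding-box centre x, 0..1 in video space
  cy: number,        // bounding-box centre y, 0..1
  faceWidth: number, // bounding-box width, 0..1 (proxy for distance)
  sensitivity = 1.0, // user-calibrated movement sensitivity
): EyePosition {
  // Re-centre to -1..1 and mirror x so moving right shifts the view right.
  const x = -(cx - 0.5) * 2 * sensitivity;
  const y = -(cy - 0.5) * 2 * sensitivity;
  // A larger face in frame means the head is closer to the screen.
  const z = 1 / Math.max(faceWidth, 0.05);
  return { x, y, z };
}
```

The renderer would read this target position each frame to recompute the camera, as described in step 4.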

Interacting with Cards in Fan Mode

This flow demonstrates the application's rich interaction design for browsing a collection. The user switches to 'Fan' mode, where cards are displayed in an elegant arc. As they hover their mouse over different cards, the holographic shader activates, giving a preview of the effect. Clicking a card initiates a smooth zoom animation, bringing the selected card to the forefront while artistically blurring the background using a depth-of-field effect. This flow highlights the seamless blend of layout, animation, and post-processing to create a polished and satisfying browsing experience.

  1. User switches the display mode to 'Fan'
  2. User hovers the mouse over a card, causing its shader to activate
  3. User clicks a card, triggering a zoom animation
  4. The system automatically applies a depth-of-field effect to blur other cards during the zoom
  5. User clicks in empty space to zoom out, returning the card to the fan layout

Opening a Booster Pack

This flow captures the excitement of discovering new cards. The user selects a booster pack from a modal, which triggers a cinematic sequence. While a visually engaging animation of the pack opening plays in the foreground, the application concurrently fetches the required card data and textures in the background. This parallel process ensures that even on slower connections, the user is entertained during the load time. The flow culminates in a satisfying reveal as the new cards cascade into view, ready for inspection.

  1. User opens the Booster Pack modal and selects a pack
  2. System plays a multi-stage visual animation of the pack opening
  3. System simultaneously loads the corresponding card set data and textures
  4. Once both the animation and loading are complete, the new cards are revealed in the scene
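The animation/loading handshake above amounts to waiting for whichever of the two concurrent tasks finishes last. A minimal sketch, with illustrative function names that are not from the pokebox codebase:

```typescript
// Start the foreground animation and the background fetch concurrently;
// reveal the cards only once both have completed.
async function openBoosterPack(
  playAnimation: () => Promise<void>,   // foreground pack-opening sequence
  loadCardSet: () => Promise<string[]>, // background data + texture fetch
): Promise<string[]> {
  const [, cards] = await Promise.all([playAnimation(), loadCardSet()]);
  return cards; // the reveal happens here, after both tasks settle
}
```

On a slow connection the animation finishes first and the reveal waits on the fetch; on a fast one the fetch finishes first and the reveal waits on the animation, matching the behaviour described in step 4.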

Sharing and Viewing a Specific Card

This user flow highlights the product's shareability. A user browsing the collection finds a card they want to share. They use the 'Share' button, which generates a unique URL for the current view. They can then send this link to a friend. When the recipient opens the link, the application bypasses the default landing experience and immediately loads the exact set and card that was shared, providing a seamless and context-rich entry point into the application.

  1. User finds a card and clicks the 'Share' button in the toolbar
  2. System uses the browser's native share function or copies the unique URL to the clipboard
  3. A second user opens the shared URL in their browser
  4. The application parses the URL parameters ('set' and 'card') on startup
  5. The system loads and displays the specific card from the URL, bypassing the default view
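Step 4's parameter parsing can be sketched with the standard `URL` API. The `set` and `card` parameter names come from the report; the function itself is illustrative.

```typescript
// Parse a shared deep link. Returns null when either parameter is
// missing, in which case the app falls back to the default landing view.
function parseDeepLink(url: string): { set: string; card: string } | null {
  const params = new URL(url).searchParams;
  const set = params.get("set");
  const card = params.get("card");
  return set && card ? { set, card } : null;
}
```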

Key Features

Holographic Rendering Engine

This is the core of the product, responsible for creating the illusion of physical, holographic cards. It uses a custom graphics pipeline to simulate how light interacts with foil and etched materials. The design strategy involves combining an off-axis camera projection, which creates a 3D parallax effect, with a variety of custom shaders that dynamically alter a card's appearance based on the user's viewing angle. The result is a realistic, interactive experience where cards shimmer and shift as if held in the hand.

  • Multi-Style Holographic Shaders
    【User Value】Ensures that cards of different rarities and types have visually distinct and authentic holographic effects, mirroring the variety found in the physical trading card game.
    【Design Strategy】A mapping system assigns a specific graphics shader to a card based on its metadata (rarity, foil type). This allows for a scalable way to support dozens of unique visual styles without hardcoding logic for each card.
    【Business Logic】
      1. During the loading of a card set, each card's metadata (e.g., 'designation', 'foilType', 'tags') is analyzed.
      2. A rules engine maps these properties to a specific shader style. For example:
         - A card with 'RAINBOW' foil and an 'ETCHED' mask is assigned the 'master-ball' shader.
         - A card with the 'ULTRA_RARE' designation is assigned the 'ultra-rare' shader.
         - A 'SHINY_RARE' card gets the 'shiny-rare' shader.
         - A card with 'FLAT_SILVER' foil and 'REVERSE' type gets the 'flatsilver-reverse' shader.
      3. When the card is rendered, the system constructs it with the assigned shader, which contains the unique logic for that particular holographic effect.
  • Parallax 'Window' Effect
    【User Value】Creates a strong illusion of depth, making it feel like the user is looking through a window into a real 3D space containing the cards, rather than at a flat screen.
    【Design Strategy】The 3D camera's projection is dynamically distorted based on the user's calculated eye position relative to the screen. This technique, known as off-axis projection, simulates how perspective changes in the real world when a viewer moves their head.
    【Business Logic】
      1. The system receives the user's eye position in 3D space (derived from head tracking, gyroscope, or other inputs).
      2. On every frame, it calculates a custom perspective projection matrix by defining a viewing frustum (the 3D viewing volume) that is skewed towards the user's eye position, instead of being symmetrical.
      3. This skewed projection matrix is applied to the scene's camera. As a result, objects closer to the user appear to move more than objects farther away, creating a natural parallax effect.
  • Cinematic Post-Processing Effects
    【User Value】Enhances the visual quality and realism of the scene with effects like 'Bloom' (which makes bright areas glow, mimicking light dispersion) and 'Depth of Field' (which blurs the background to focus attention on a specific card).
    【Design Strategy】An effect composition pipeline applies a series of screen-space graphical filters after the main 3D scene is rendered but before it is displayed to the user. These effects can be enabled, disabled, and adjusted in real time.
    【Business Logic】
      1. The 3D scene is rendered to an off-screen buffer.
      2. A 'Bloom' pass is applied. If enabled in the settings, it identifies the brightest parts of the image, blurs them, and adds them back to the original image to create a glowing effect. The intensity and size of the glow are user-configurable.
      3. A 'Depth of Field' pass blurs parts of the scene that are not in focus. The focus point automatically tracks the main card being viewed or a user-specified distance, and the amount of blur is controlled by an 'f-stop' setting.
      4. An 'Output' pass applies final color corrections and tone mapping before presenting the final image to the user, ensuring consistent brightness and color regardless of which effects are enabled.
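The skewed frustum behind the parallax 'window' effect can be sketched numerically. This follows the standard off-axis projection formulation (compute asymmetric near-plane bounds from the eye position relative to a virtual screen); pokebox's actual implementation may differ in conventions and units.

```typescript
// Asymmetric frustum bounds at the near plane, for a screen centred at
// the origin in the z = 0 plane and an eye at positive z.
interface Frustum { left: number; right: number; bottom: number; top: number; }

function offAxisFrustum(
  eye: { x: number; y: number; z: number }, // eye position in world units
  halfW: number, // half screen width in world units
  halfH: number, // half screen height
  near: number,  // near-plane distance from the eye
): Frustum {
  // Scale the eye-to-screen-edge offsets from the screen plane down to
  // the near plane; a non-centred eye yields a skewed (asymmetric) frustum.
  const s = near / eye.z;
  return {
    left:   (-halfW - eye.x) * s,
    right:  ( halfW - eye.x) * s,
    bottom: (-halfH - eye.y) * s,
    top:    ( halfH - eye.y) * s,
  };
}
```

With the eye centred the bounds are symmetric; as the head moves right, the frustum skews left, which is exactly what produces the "looking through a window" parallax.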

Multi-Modal Interaction System

This module provides a flexible and device-appropriate system for controlling the 3D scene. The core design is to abstract various hardware inputs (webcam, gyroscope, mouse, keyboard) into a single, unified 'eye position' state. This decouples the rendering engine from the input source, allowing the holographic effects to be driven by head movement on a desktop, phone tilting on mobile, or even manual keyboard presses, providing a seamless experience across different platforms.

  • Webcam Head Tracking
    【User Value】Offers a hands-free, intuitive way for desktop users to interact with the holographic cards simply by moving their head, creating the most immersive experience.
    【Design Strategy】Leverage a browser-based machine learning library (MediaPipe) to perform face detection directly on the user's device, converting the detected head position into 3D coordinates without sending any video data to a server.
    【Business Logic】
      1. After the user grants camera permission, the application starts processing the video stream.
      2. On each frame, the face detection model identifies the user's face and returns a bounding box.
      3. The center of this bounding box is translated from 2D screen coordinates into a 3D 'target eye position' in the virtual world, factoring in user-calibrated settings for movement sensitivity.
      4. The application's central state is updated with this new target position, which the rendering engine then uses to drive the parallax effect.
  • Mobile Gyroscope Tilt Control
    【User Value】Provides a natural interaction method for mobile users, allowing them to explore the holographic effects by physically tilting their phone.
    【Design Strategy】Use the device's orientation sensor (gyroscope) as the input source. Apply a calibration step and spring physics to smooth the raw sensor data, providing a fluid and responsive feel.
    【Business Logic】
      1. After the user grants permission (required on iOS), the application starts listening for device orientation events.
      2. The first event received is used to establish a 'rest position', calibrating the system to the user's initial holding angle.
      3. Subsequent tilt data (front-to-back and left-to-right) is normalized to a standard range (-1 to 1) based on a maximum tilt angle of 15 degrees.
      4. The normalized values are passed through a spring-damping physics model to smooth out jitter from raw sensor data.
      5. The smoothed output simultaneously drives both the camera's 'target eye position' (for parallax) and the individual card's rotation (for a direct tilt effect).
  • Mouse, Keyboard, and Touch Fallbacks
    【User Value】Ensures the application is fully usable for all users, including those who decline camera/gyro permissions or are on devices without them, and provides precise manual control options.
    【Design Strategy】Implement a set of fallback input handlers that manipulate either the card's rotation or the camera's position directly.
    【Business Logic】
      - **Mouse Pointer**: Moving the mouse over the canvas tilts the card. Pointer coordinates are normalized and scaled, then fed into a spring-physics model to create a smooth rotation. The card returns to center when the mouse leaves the canvas.
      - **Keyboard**: Arrow keys move the camera position left/right and up/down by a fixed step (8% of screen height). 'W' and 'S' keys move the camera forward and backward.
      - **Touch Swipe (Stack Mode)**: On mobile devices in 'stack' view, a vertical swipe gesture flicks through cards. A swipe is only registered if it meets specific criteria: a minimum distance of 50 pixels, a maximum duration of 500 milliseconds, and a minimum velocity of 0.3 pixels per millisecond.
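The stack-mode swipe criteria quoted above (50 px minimum distance, 500 ms maximum duration, 0.3 px/ms minimum velocity) reduce to a small predicate. The function shape is an assumption:

```typescript
// A gesture counts as a swipe only if it is long enough, fast enough,
// and completed quickly enough.
function isValidSwipe(distancePx: number, durationMs: number): boolean {
  if (durationMs <= 0) return false;
  const velocity = Math.abs(distancePx) / durationMs; // pixels per ms
  return Math.abs(distancePx) >= 50 && durationMs <= 500 && velocity >= 0.3;
}
```

Note that a gesture can fail on velocity alone: a slow 60 px drag over 400 ms (0.15 px/ms) is rejected even though it clears the distance and duration thresholds.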

Card Presentation & Navigation

This module governs how cards are displayed and how users interact with them. It supports multiple, distinct layout modes to suit different use cases, such as showcasing a single card, fanning out a collection, or revealing cards from a booster pack. The system includes custom animations and interaction logic tailored to each layout, creating a dynamic and engaging user journey.

  • Multi-Mode Card Layouts
    【User Value】Provides different ways to view cards, from detailed inspection of a single card to browsing a large collection, enhancing the discovery and collection experience.
    【Design Strategy】A central 'display mode' state dictates which layout algorithm is used to arrange the cards in the 3D scene. Switching modes triggers a scene rebuild with the new layout.
    【Business Logic】
      - **Single Mode**: Displays one card, centered and large, for detailed inspection.
      - **Fan Mode**: Arranges up to 7 cards in an elegant, curved fan layout, ideal for showcasing a variety of cards.
      - **Stack Mode**: Presents cards in a vertical stack, designed for mobile and navigated via swipe gestures.
      - **Carousel Mode**: Shows a curated set of 'hero' cards in a rotating 3D carousel, used for the initial landing experience.
  • Interactive Layout Animations
    【User Value】Makes card interactions feel fluid and satisfying, with cinematic transitions for actions like zooming in on a card or flicking through a stack.
    【Design Strategy】Each layout mode has a dedicated animator object that manages state and timing for its specific interactions, triggered by user input.
    【Business Logic】
      - **Fan Mode Zoom**: When a user clicks on a card in the fan layout, it smoothly animates from its position in the fan to a centered, zoomed-in view. This zoom transition automatically triggers a 1-second ramp-up of the Depth of Field effect, blurring the other cards to focus attention. Clicking anywhere else returns the card to the fan.
      - **Stack Mode Swipe**: When a user performs a valid swipe gesture, the stack animator transitions the cards up or down with a fixed duration of 0.45 seconds to create a consistent 'flick' effect.
  • Cinematic Booster Pack Opening
    【User Value】Turns the act of getting new cards into an exciting, visually rewarding event, mimicking the anticipation of opening a physical pack.
    【Design Strategy】A multi-stage animation sequence is played in the UI, running in parallel with the background loading of the new card data. A handshake mechanism synchronizes the final reveal, ensuring a smooth experience even on slow networks.
    【Business Logic】
      1. User selects a booster pack from a modal.
      2. The system simultaneously starts loading the new card set data in the background and begins a series of timed CSS animations in the foreground (e.g., the selected pack slides to the center, shakes, then bursts open).
      3. Once the visual animation completes, it sends a signal to the data loading process.
      4. If data loading is also complete, the new cards are immediately revealed (e.g., cascading into a fan). If the data is still loading, a spinner is shown until the cards are ready, at which point the reveal happens automatically.
  • Card Search and URL Sharing
    【User Value】Allows users to easily find specific cards within a set and share a direct link to any card they are viewing, enabling social sharing and bookmarking.
    【Design Strategy】Implement a simple text search that filters the current set's catalog, and update the browser's URL in real time to reflect the currently viewed set and card.
    【Business Logic】
      - **Search**: A search input filters the card catalog based on a substring match. To maintain performance, search results are capped at the first 20 matches.
      - **URL State**: When a user navigates to a card, the URL is updated with query parameters (e.g., `?set=sv3_en&card=0257`). When a user visits a URL with these parameters, the application automatically loads and displays that specific card.
      - **Sharing**: A 'Share' button constructs the current card's URL and uses the browser's native Web Share API if available, with a fallback to copying the URL to the clipboard.
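The capped substring search can be sketched in a few lines. The card shape and field names are illustrative assumptions:

```typescript
// Case-insensitive substring match over the catalog, stopping at the
// first `limit` hits to keep per-keystroke cost bounded.
interface Card { name: string; number: string; }

function searchCards(catalog: Card[], query: string, limit = 20): Card[] {
  const q = query.trim().toLowerCase();
  if (!q) return [];
  const results: Card[] = [];
  for (const card of catalog) {
    if (card.name.toLowerCase().includes(q)) {
      results.push(card);
      if (results.length >= limit) break; // the 20-match performance cap
    }
  }
  return results;
}
```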

Asset & Catalog Management

This module is the data backbone of the application, responsible for fetching card set definitions, processing card metadata, and loading all necessary image assets (card art, foil masks, etc.) from a Content Delivery Network (CDN). It includes logic for selecting the best foil variant for a card, mapping rarities to shader styles, and managing a GPU texture cache to optimize performance and memory usage.

  • Dynamic Card Catalog Loading
    【User Value】Allows new Pokémon TCG sets to be added to the app without requiring a code update, simply by adding a new JSON file to the asset server.
    【Design Strategy】A central registry lists available card sets. When a set is selected, its corresponding JSON definition file is fetched from the CDN, processed into a standardized catalog format, and cached to avoid re-fetching.
    【Business Logic】
      1. User selects a set (e.g., 'Temporal Forces').
      2. The system finds the corresponding JSON file path in its registry and fetches it from the CDN.
      3. The raw JSON data is processed: cards are filtered, grouped by collector number, and the best foil variant is chosen for each based on a priority system.
      4. The processed data is used to build the final card catalog, which includes resolved texture URLs and the assigned holographic shader style for each card.
      5. This catalog is stored in a reactive state, triggering the UI and 3D scene to update.
  • GPU Texture Caching and Lifecycle
    【User Value】Improves performance and reduces network usage by avoiding re-downloading card images that have already been viewed, and prevents memory leaks by properly releasing GPU resources when they are no longer needed.
    【Design Strategy】A dedicated 'card loader' service maintains a cache of all loaded textures. It handles loading new textures on demand and provides a mechanism to clear the cache and dispose of GPU memory when switching sets.
    【Business Logic】
      1. When a card needs to be displayed, the system requests its textures (front, mask, foil) from the card loader.
      2. The loader first checks its internal cache. If the textures are present, it returns them immediately.
      3. If not cached, the loader fetches the images from the CDN, converts them into GPU textures, applies rendering settings (e.g., filtering), and stores them in the cache before returning them.
      4. When the user switches to a different card set, a `clearCache` function iterates through the cached textures, calls a `dispose()` method on each to free GPU memory, and clears the cache map.
  • Asset Loading Failure Handling
    【User Value】Prevents the application from crashing if a card image fails to load, and informs the user about the issue via a non-intrusive notification, allowing the rest of the app to remain functional.
    【Design Strategy】All texture loading operations are wrapped with error handling. On failure, a user-facing toast notification is displayed and the process continues gracefully, allowing a card to render partially (e.g., without its holo effect) if possible.
    【Business Logic】
      1. An attempt is made to load a texture from the CDN.
      2. If the network request fails (e.g., 404 Not Found), the error is caught.
      3. A warning is logged to the developer console for debugging purposes.
      4. A toast notification is triggered with a user-friendly message, such as 'Failed to load card asset: Charizard VMAX'. This toast automatically disappears after 5 seconds.
      5. The loading process continues with the remaining assets, preventing the entire card or scene from failing to render due to a single missing image.
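The cache-hit/load/dispose lifecycle described under GPU Texture Caching can be sketched with a stand-in texture type. The class and method names are assumptions; in the real app the texture would be a Three.js texture whose `dispose()` frees GPU memory:

```typescript
// Minimal load-once cache with explicit disposal on set switch.
interface Texture { url: string; disposed: boolean; dispose(): void; }

class CardTextureCache {
  private cache = new Map<string, Texture>();

  load(url: string): Texture {
    const hit = this.cache.get(url);
    if (hit) return hit; // cache hit: no re-download, no new GPU upload
    const tex: Texture = {
      url,
      disposed: false,
      dispose() { this.disposed = true; }, // stands in for freeing GPU memory
    };
    this.cache.set(url, tex);
    return tex;
  }

  clearCache(): void {
    // Called when switching sets: release GPU resources, then forget entries.
    for (const tex of this.cache.values()) tex.dispose();
    this.cache.clear();
  }
}
```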

Developer Experience & Deployment

This module focuses on the project's non-functional quality attributes, providing a robust infrastructure for development, testing, deployment, and performance monitoring. It ensures code quality through automated checks and offers a streamlined process for deploying the application as a secure, optimized, and offline-capable Progressive Web App (PWA).

  • Dockerized Production Deployment
    【User Value】Provides a reliable and reproducible way to deploy the application in a production environment, complete with security best practices and performance optimizations.
    【Design Strategy】A multi-stage Dockerfile first builds the optimized static assets and then packages them into a minimal, hardened Nginx web server image.
    【Business Logic】
      - **Build Stage**: Uses a Node.js environment to install dependencies and run the production build script, creating optimized JS/CSS bundles.
      - **Runtime Stage**: Copies the built assets into a lightweight Nginx container. The Nginx configuration is customized to include:
        - **Security Headers**: Strict Content-Security-Policy (CSP), HSTS, and restrictions on camera/microphone permissions.
        - **Performance**: Gzip compression and rate limiting (30 requests/second) to handle traffic spikes.
        - **SPA Routing**: A fallback rule ensures that all navigation requests are directed to `index.html`, which is necessary for a single-page application.
        - **Health Checks**: An internal `/health` endpoint for orchestration systems.
  • Comprehensive Automated Testing
    【User Value】Ensures application stability and correctness by automatically verifying logic, rendering, and user flows across different browsers before changes are deployed.
    【Design Strategy】A multi-layered testing strategy is employed, using different tools to test different aspects of the application.
    【Business Logic】
      - **Unit & Shader Tests (Vitest)**: These tests run in a headless environment. A mock WebGL context is used to validate that GLSL shaders compile correctly and that the data binding between the application logic and the shaders works as expected.
      - **End-to-End Tests (Playwright)**: These tests launch real browsers (Chromium, Firefox, WebKit) and simulate user interactions like clicking buttons, navigating carousels, and opening modals. They validate entire user flows and use a special debug-bridge API exposed by the application to inspect the state of the 3D scene (e.g., verifying that 9 card meshes are present in carousel mode).
  • Opt-In Performance Observability
    【User Value】Allows developers and operators to diagnose performance issues in production by capturing detailed timing data for asset loading and page navigation, without adding overhead for regular users.
    【Design Strategy】Integrate the OpenTelemetry standard for distributed tracing, but activate it only if a specific environment variable is configured during the build process.
    【Business Logic】
      1. The application build process checks for a `VITE_OTEL_COLLECTOR_URL` environment variable.
      2. If the variable is present, the OpenTelemetry SDK is initialized. It automatically instruments browser `fetch` requests and initial page loads to capture timing data.
      3. Custom 'spans' (timed operations) are created for application-specific logic, like loading a set of card textures or rebuilding the 3D scene.
      4. All timing data is sent to the configured collector URL for analysis. If the environment variable is not set, the entire system is disabled, resulting in zero performance overhead.
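The opt-in gate for telemetry reduces to a single check on the build-time variable. In this sketch, `initTracing` stands in for whatever OpenTelemetry SDK setup the project actually performs; only the `VITE_OTEL_COLLECTOR_URL` variable name comes from the report:

```typescript
// Initialise telemetry only when a collector URL was baked in at build
// time; otherwise do nothing, so regular users pay zero overhead.
function setupTelemetry(
  env: Record<string, string | undefined>,
  initTracing: (collectorUrl: string) => void, // hypothetical SDK hook
): boolean {
  const url = env["VITE_OTEL_COLLECTOR_URL"];
  if (!url) return false; // variable unset: system stays disabled
  initTracing(url);
  return true;
}
```

In a Vite app the `env` argument would be `import.meta.env`, which inlines the variable at build time so the disabled branch can be dead-code eliminated entirely.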

Core Technical Capabilities

Motion-Driven Real-time Holographic Rendering

Problem: How to replicate the complex, angle-dependent light play of a physical holographic card in a web browser, making it feel truly interactive and not like a pre-rendered video?

Solution: A sophisticated rendering pipeline translates user movement into realistic light effects.

  1. An 'eye position' is calculated from user input (webcam, gyro, etc.).
  2. This position is used to create an off-axis camera projection, which gives the scene a natural 3D parallax effect.
  3. For each rendered card, the eye position is converted into a normalized 2D 'pointer' vector relative to the card's surface. This vector represents the viewing angle.
  4. This 'pointer' vector is passed as a 'uniform' variable to a custom GLSL fragment shader.
  5. The shader uses this vector to mathematically manipulate texture lookups and color calculations in real time, simulating light shifting across the card's foil patterns, changing colors, and creating specular highlights, all dynamically driven by the user's movement.
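The eye-to-pointer conversion in step 3 can be sketched as a plane projection plus normalisation. The card is assumed axis-aligned with its centre and half-extents known; all names are illustrative, not pokebox's actual code:

```typescript
// Convert the tracked eye position into the normalised -1..1 'pointer'
// vector that would be fed to the holographic fragment shader as a uniform.
function eyeToPointer(
  eye: { x: number; y: number; z: number },
  card: { cx: number; cy: number; halfW: number; halfH: number },
): { u: number; v: number } {
  const clamp = (n: number) => Math.max(-1, Math.min(1, n));
  // Offset of the eye from the card centre, normalised by the card's
  // half-size and clamped to the range the shader expects.
  return {
    u: clamp((eye.x - card.cx) / card.halfW),
    v: clamp((eye.y - card.cy) / card.halfH),
  };
}
```

Inside the shader this vector would then offset texture lookups and tint calculations, so the foil highlight slides across the card as the viewer moves.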

Technologies: WebGL, GLSL Shaders, Off-axis Projection, Vector Math

Boundaries & Risks: This approach is computationally intensive and relies on the user having a decent GPU for smooth performance. Creating new, authentic-looking shader effects requires specialized GLSL programming and artistic skill. The effect's quality is highly dependent on the quality of the input 'foil mask' textures.

Unified Multi-Modal Input System

Problem: How to provide a natural and device-appropriate way for users to control the 3D scene, whether they are on a desktop with a webcam, a laptop with a mouse, or a mobile phone?

Solution: An abstraction layer unifies various input sources into a single, consistent signal for the rendering engine.

  1. Separate, self-contained modules ('composables') each handle a single input type (e.g., `useFaceTracking`, `useGyroscope`, `useMouseTilt`).
  2. Each module processes its raw hardware input. For example, the face tracking module uses MediaPipe to get head coordinates, while the gyroscope module normalizes sensor data and applies spring physics for smoothing.
  3. All modules translate their processed input into a standardized `targetEye` (x, y, z) coordinate and update this value in a central application state store.
  4. The 3D rendering engine is completely decoupled from the input source; it simply reads the `targetEye` position from the state store on every frame to drive the camera. This design makes it easy to add new input methods in the future and ensures consistent behavior across all devices.
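The decoupling in steps 3 and 4 amounts to a write-many/read-one store. A minimal sketch, with the store shape and method names as assumptions:

```typescript
// Central store for the unified 'target eye position'. Input composables
// only ever write; the render loop only ever reads, once per frame.
type Eye = { x: number; y: number; z: number };

class EyeStore {
  private target: Eye = { x: 0, y: 0, z: 1 }; // neutral viewing position

  // Called by any input module: face tracking, gyroscope, keyboard, etc.
  setTargetEye(eye: Eye): void {
    this.target = eye;
  }

  // Read by the renderer each frame to drive the off-axis camera.
  getTargetEye(): Eye {
    return this.target;
  }
}
```

Because the renderer never knows which module wrote the value, adding a new input method is just another caller of `setTargetEye`.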

Technologies: MediaPipe, DeviceOrientationEvent API, Pointer Events, State Management

Boundaries & Risks: The system relies on browser APIs that require explicit user permission (camera, gyroscope), which can be a point of friction. The accuracy of head tracking is dependent on the MediaPipe model and factors like lighting conditions. Combining multiple simultaneous inputs (e.g., keyboard and face tracking) can lead to conflicting state updates if not arbitrated correctly.

Performance-Conscious Scene Management

Problem: How to render a complex 3D scene with high-resolution textures and multiple shader effects, and allow users to switch between views, without causing lag, stuttering, or memory leaks?

Solution: A combination of optimization strategies is used to manage performance and memory.

  1. **Selective Rebuilds**: The system differentiates between configuration changes that require a full scene reconstruction (e.g., changing screen dimensions) and those that only require rebuilding the cards. This avoids unnecessarily destroying and recreating elements like the scene's lighting and background.
  2. **Lazy Shader Activation**: In layouts with many cards (like 'Fan' mode), cards are initially rendered with a cheap, basic material. The full, expensive holographic shader is only compiled and swapped in when a user actually interacts with a card (e.g., by hovering over it), deferring the performance cost.
  3. **Parallelized Loading**: Visual animations (like the booster pack opening) run in parallel with asynchronous data and texture loading. This hides network latency and makes the application feel more responsive.
  4. **Explicit GPU Memory Management**: The system tracks all loaded textures and explicitly calls `dispose()` to free GPU memory when they are no longer needed (e.g., when switching to a new card set), preventing memory leaks.
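Lazy shader activation can be sketched with a stand-in material type: every card starts cheap, and the expensive shader is built at most once, and only for cards the user actually touches. The class names and the static compile counter are illustrative, not from the codebase:

```typescript
// Cards start with a cheap basic material; the expensive holographic
// shader is compiled lazily on first interaction.
interface Material { kind: "basic" | "holo"; }

class CardMesh {
  material: Material = { kind: "basic" };
  private holoBuilt = false;
  static shaderCompiles = 0; // stands in for the deferred compile cost

  onHover(): void {
    if (!this.holoBuilt) {
      CardMesh.shaderCompiles++;        // expensive compile happens once...
      this.material = { kind: "holo" }; // ...then the material is swapped in
      this.holoBuilt = true;
    }
  }
}
```

A fan of untouched cards therefore costs nothing beyond the basic material; compile stalls are spread across interactions instead of hitting all at once on scene build.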

Technologies: WebGL resource management, Lazy Loading, Asynchronous Programming, State-driven rebuilds

Boundaries & Risks: The cache management is basic and does not use an LRU (Least Recently Used) policy, so memory could grow in a very long session. Full scene rebuilds, though minimized, can still cause a brief flicker or freeze. The logic for deciding what to rebuild is complex and can be a source of bugs.

Related Projects

Discover more public DeepDive reports to compare architecture decisions.

  • usememos/memos @c4176b4ef1c1
  • imanian/appointmate @66f1c0a89b98
  • bytedance/UI-TARS-desktop @3f254968e627
  • calcom/cal.com @0b0a5478fb39
  • ItzCrazyKns/Perplexica @d7b020e5bb64
  • openclaw/openclaw @74fbbda2833e