
RFC 011: Network Architecture



This RFC proposes a network architecture for Wellcome Collection services, ensuring effective security, maintenance, and scalability as the number of services grows.

Last modified: 2019-10-16T16:33:39+01:00

Background

As the number of Wellcome Collection services grows, and as those services integrate with third parties, good practice in network infrastructure is required for effective security, maintenance, and scalability.

Problem statement

There are currently a number of products looked after by the Digital Platform team. The estate has grown organically, and the underlying architecture reflects that growth rather than best practice in all cases.

Several development teams, in addition to the Digital Platform team, require access for software development and operations work.

The current estate

The services we are currently responsible for are as follows:

  • Workflow:

    • Goobi: digitisation workflow services, hosted on-site at Wellcome and in AWS. Maintained by Intranda.

    • Archivematica: born-digital workflow service, hosted in AWS.

  • Catalogue:

    • Catalogue API: Wellcome catalogue API, hosted in AWS & with ElasticCloud.

    • Catalogue Pipeline: Wellcome catalogue ingest pipeline, hosted in AWS.

    • Data API: large Wellcome catalogue datasets, hosted in AWS.

    • Catalogue Adapter services: hosted in AWS, syncing data from services hosted on-site and AWS.

    • IIIF Services:

      • Image API: hosted in AWS.

      • Presentation API: hosted in AWS. Maintained by digirati.

  • Storage:

    • Archival storage service: long term immutable data storage with audit trail, hosted in AWS, communicating with on-site & AWS hosted services.

  • Data science:

    • Managed notebooks & storage: hosted privately in AWS.

    • Labs: a collection of public experiments, hosted in AWS.

  • Monitoring:

    • Grafana: Grafana service with visibility on AWS accounts, hosted in AWS.

    • ECS Dashboard: dashboard providing visibility on ECS services and deployments.

Network infrastructure

Networks should be split along project lines, using a consistent IP CIDR scheme that is non-overlapping with other Wellcome infrastructure. Network access to 3rd parties should be made available via a transit VPC.

  • transit-10-90-4-0-23: 10.90.4.0/23 - Transit VPC: IP range within Wellcome internal network, contains VPN connection to Wellcome via internal firewall. (owned by Platform AWS account)

  • storage-172-30-0-0-16: 172.30.0.0/16 - Storage service infrastructure (owned by Storage AWS account)

  • monitoring-172-28-0-0-16: 172.28.0.0/16 - Monitoring infrastructure (owned by Platform AWS account)

  • datascience-172-26-0-0-16: 172.26.0.0/16 - Data science infrastructure & Labs (owned by Collection Data AWS account)

  • catalogue-172-31-0-0-16: 172.31.0.0/16 - Catalogue service infrastructure (owned by Catalogue AWS account)
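The VPC names above encode the CIDR block they cover. As an illustrative sketch (the `<project>-<a-b-c-d-prefix>` naming convention is inferred from the list above, not prescribed elsewhere), the name can be derived mechanically from the range:

```python
import ipaddress


def vpc_name(project: str, cidr: str) -> str:
    """Derive a VPC name of the form <project>-<a-b-c-d-prefix> from its CIDR."""
    net = ipaddress.ip_network(cidr)
    address_part = str(net.network_address).replace(".", "-")
    return f"{project}-{address_part}-{net.prefixlen}"


print(vpc_name("storage", "172.30.0.0/16"))  # storage-172-30-0-0-16
print(vpc_name("transit", "10.90.4.0/23"))   # transit-10-90-4-0-23
```

Encoding the range in the name makes a VPC's address space visible in consoles and Terraform plans without opening its configuration.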

Older VPCs:

  • wellcomecollection: 172.20.0.0/16 - wellcomecollection.org infra

  • workflow: 10.50.0.0/16 - Workflow infrastructure

The default VPC has been removed.
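Since the key property of the scheme is that ranges do not overlap, Python's `ipaddress` module can confirm it across both the current and older VPCs. This is a verification sketch, not part of the deployed infrastructure:

```python
import ipaddress
from itertools import combinations

# CIDR blocks from this RFC: the proposed per-project VPCs plus the older ones.
vpcs = {
    "transit": "10.90.4.0/23",
    "storage": "172.30.0.0/16",
    "monitoring": "172.28.0.0/16",
    "datascience": "172.26.0.0/16",
    "catalogue": "172.31.0.0/16",
    "wellcomecollection": "172.20.0.0/16",
    "workflow": "10.50.0.0/16",
}

networks = {name: ipaddress.ip_network(cidr) for name, cidr in vpcs.items()}

# Every pair of VPC ranges must be disjoint.
overlapping = [
    (a, b)
    for (a, net_a), (b, net_b) in combinations(networks.items(), 2)
    if net_a.overlaps(net_b)
]

assert not overlapping, f"Overlapping VPC ranges: {overlapping}"
print("All VPC CIDR ranges are disjoint.")
```

A check like this could run in CI against the Terraform variables that define the VPCs, catching an overlapping allocation before it is applied.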
