sopa.io

Notes

Because the companies frequently update their data formats, you might have issues loading your data. In that case, consider opening an issue detailing the version of the machine you used and the error log, as well as an example of the file names that you are trying to read.

Related to spatialdata-io

The spatialdata-io library already contains many readers. Here, we updated some of the existing spatialdata-io readers and added a few others. In the future, we will rely entirely on spatialdata-io.

Readers

sopa.io.xenium(path, image_models_kwargs=None, imread_kwargs=None)

Read Xenium data as a SpatialData object. For more information, refer to spatialdata-io.

This function reads the following files:
  • transcripts.parquet: transcripts locations and names
  • experiment.xenium: metadata file
  • morphology_focus.ome.tif: morphology image (or a directory, for recent versions of the Xenium)

Parameters:

  • path (str | Path): Path to the Xenium directory containing all the experiment files. Required.
  • image_models_kwargs (dict | None): Keyword arguments passed to spatialdata.models.Image2DModel. Default: None.
  • imread_kwargs (dict | None): Keyword arguments passed to dask_image.imread.imread. Default: None.

Returns:

  • SpatialData: A SpatialData object representing the Xenium experiment.

Source code in sopa/io/reader/xenium.py
def xenium(
    path: str | Path,
    image_models_kwargs: dict | None = None,
    imread_kwargs: dict | None = None,
) -> SpatialData:
    """Read Xenium data as a `SpatialData` object. For more information, refer to [spatialdata-io](https://spatialdata.scverse.org/projects/io/en/latest/generated/spatialdata_io.xenium.html).

    This function reads the following files:
        - `transcripts.parquet`: transcripts locations and names
        - `experiment.xenium`: metadata file
        - `morphology_focus.ome.tif`: morphology image (or a directory, for recent versions of the Xenium)


    Args:
        path: Path to the Xenium directory containing all the experiment files
        image_models_kwargs: Keyword arguments passed to `spatialdata.models.Image2DModel`.
        imread_kwargs: Keyword arguments passed to `dask_image.imread.imread`.

    Returns:
        A `SpatialData` object representing the Xenium experiment
    """
    path = Path(path)
    image_models_kwargs, imread_kwargs = _default_image_kwargs(image_models_kwargs, imread_kwargs)
    image_models_kwargs["c_coords"] = [XeniumKeys.MORPHOLOGY_FOCUS_CHANNEL_0.value]

    with open(path / XeniumKeys.XENIUM_SPECS) as f:
        specs = json.load(f)

    points = {"transcripts": _get_points(path, specs)}

    # parse the version (this also triggers a warning if the version cannot be parsed)
    version = _parse_version_of_xenium_analyzer(specs, hide_warning=False)

    images = {}
    if version is None or version < packaging.version.parse("2.0.0"):
        images["morphology_focus"] = _get_images(
            path,
            XeniumKeys.MORPHOLOGY_FOCUS_FILE,
            imread_kwargs,
            image_models_kwargs,
        )
    else:
        morphology_focus_dir = path / XeniumKeys.MORPHOLOGY_FOCUS_DIR
        files = [
            morphology_focus_dir / XeniumKeys.MORPHOLOGY_FOCUS_CHANNEL_IMAGE.value.format(i)
            for i in range(4)
        ]
        files = [f for f in files if f.exists()]
        if len(files) not in [1, 4]:
            raise ValueError(
                "Expected 1 (no segmentation kit) or 4 (segmentation kit) files in the morphology focus directory, "
                f"found {len(files)}: {files}"
            )

        if len(files) == 4:
            image_models_kwargs["c_coords"] = [
                XeniumKeys.MORPHOLOGY_FOCUS_CHANNEL_0,
                XeniumKeys.MORPHOLOGY_FOCUS_CHANNEL_1,
                XeniumKeys.MORPHOLOGY_FOCUS_CHANNEL_2,
                XeniumKeys.MORPHOLOGY_FOCUS_CHANNEL_3,
            ]

        images["morphology_focus"] = _get_images(
            morphology_focus_dir,
            files,
            imread_kwargs,
            image_models_kwargs,
        )

    return SpatialData(images=images, points=points)
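
A minimal usage sketch of this reader (the directory path below is hypothetical and should point to your own Xenium output folder):

import sopa.io

sdata = sopa.io.xenium("/path/to/xenium_directory")  # hypothetical path
print(sdata)  # contains the "morphology_focus" image and the "transcripts" points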

sopa.io.merscope(path, backend=None, z_layers=3, region_name=None, slide_name=None, image_models_kwargs=None, imread_kwargs=None)

Read MERSCOPE data as a SpatialData object.

This function reads the following files:
  • detected_transcripts.csv: transcripts locations and names
  • all the images under the images directory
  • images/micron_to_mosaic_pixel_transform.csv: affine transformation

Parameters:

  • path (str | Path): Path to the MERSCOPE directory containing all the experiment files. Required.
  • backend (str): Either "dask_image" or "rioxarray" (the latter uses less RAM). By default, uses "rioxarray" if and only if the rioxarray library is installed. Default: None.
  • z_layers (int | list[int] | None): Indices of the z-layers to consider. Either one int index, or a list of int indices. If None, then no image is loaded. By default, only the middle layer is considered (that is, layer 3). Default: 3.
  • region_name (str | None): Name of the region of interest, e.g., 'region_0'. If None, then the name of the path directory is used. Default: None.
  • slide_name (str | None): Name of the slide/run. If None, then the name of the parent directory of path is used (whose name starts with a date). Default: None.
  • image_models_kwargs (dict | None): Keyword arguments passed to spatialdata.models.Image2DModel. Default: None.
  • imread_kwargs (dict | None): Keyword arguments passed to dask_image.imread.imread. Default: None.

Returns:

  • SpatialData: A SpatialData object representing the MERSCOPE experiment.

Source code in sopa/io/reader/merscope.py
def merscope(
    path: str | Path,
    backend: str = None,
    z_layers: int | list[int] | None = 3,
    region_name: str | None = None,
    slide_name: str | None = None,
    image_models_kwargs: dict | None = None,
    imread_kwargs: dict | None = None,
) -> SpatialData:
    """Read MERSCOPE data as a `SpatialData` object.

    This function reads the following files:
        - `detected_transcripts.csv`: transcripts locations and names
        - all the images under the `images` directory
        - `images/micron_to_mosaic_pixel_transform.csv`: affine transformation

    Args:
        path: Path to the MERSCOPE directory containing all the experiment files
        backend: Either `"dask_image"` or `"rioxarray"` (the latter uses less RAM). By default, uses `"rioxarray"` if and only if the `rioxarray` library is installed.
        z_layers: Indices of the z-layers to consider. Either one `int` index, or a list of `int` indices. If `None`, then no image is loaded. By default, only the middle layer is considered (that is, layer 3).
        region_name: Name of the region of interest, e.g., `'region_0'`. If `None` then the name of the `path` directory is used.
        slide_name: Name of the slide/run. If `None` then the name of the parent directory of `path` is used (whose name starts with a date).
        image_models_kwargs: Keyword arguments passed to `spatialdata.models.Image2DModel`.
        imread_kwargs: Keyword arguments passed to `dask_image.imread.imread`.

    Returns:
        A `SpatialData` object representing the MERSCOPE experiment
    """
    assert (
        backend is None or backend in SUPPORTED_BACKENDS
    ), f"Backend '{backend}' not supported. Should be one of: {', '.join(SUPPORTED_BACKENDS)}"

    path = Path(path).absolute()
    image_models_kwargs, imread_kwargs = _default_image_kwargs(image_models_kwargs, imread_kwargs)

    images_dir = path / MerscopeKeys.IMAGES_DIR

    microns_to_pixels = Affine(
        np.genfromtxt(images_dir / MerscopeKeys.TRANSFORMATION_FILE),
        input_axes=("x", "y"),
        output_axes=("x", "y"),
    )

    vizgen_region = path.name if region_name is None else region_name
    slide_name = path.parent.name if slide_name is None else slide_name
    dataset_id = f"{slide_name}_{vizgen_region}"

    # Images
    images = {}

    z_layers = [z_layers] if isinstance(z_layers, int) else z_layers or []

    stainings = _get_channel_names(images_dir)
    image_transformations = {"microns": microns_to_pixels.inverse()}

    reader = _get_reader(backend)

    if stainings:
        for z_layer in z_layers:
            images[f"{dataset_id}_z{z_layer}"] = reader(
                images_dir,
                stainings,
                z_layer,
                image_models_kwargs,
                image_transformations,
                **imread_kwargs,
            )

    # Transcripts
    points = {}
    transcript_path = path / MerscopeKeys.TRANSCRIPTS_FILE
    if transcript_path.exists():
        points[f"{dataset_id}_transcripts"] = _get_points(transcript_path)
    else:
        logger.warning(
            f"Transcript file {transcript_path} does not exist. Transcripts are not loaded."
        )

    return SpatialData(points=points, images=images)
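
A minimal usage sketch (the path is hypothetical; z_layers=3 reads only the middle z-layer, which is also the default):

import sopa.io

# uses the "rioxarray" backend if installed, otherwise "dask_image"
sdata = sopa.io.merscope("/path/to/merscope_region_0", z_layers=3)  # hypothetical path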

sopa.io.cosmx(path, dataset_id=None, fov=None, read_proteins=False, image_models_kwargs=None, imread_kwargs=None)

Read CosMx Nanostring data. The fields of view are stitched together, unless fov is provided.

This function reads the following files:
  • *_fov_positions_file.csv or *_fov_positions_file.csv.gz: FOV locations
  • Morphology2D directory: all the FOVs morphology images
  • Morphology_ChannelID_Dictionary.txt: names of the morphology channels
  • *_tx_file.csv.gz or *_tx_file.csv: transcripts locations and names
  • If read_proteins is True, all the images under the nested ProteinImages directories will be read

Parameters:

  • path (str | Path): Path to the root directory containing Nanostring files. Required.
  • dataset_id (Optional[str]): Optional name of the dataset (needs to be provided if it cannot be inferred). Default: None.
  • fov (int | str | None): Name or number of one single field of view to be read. If a string is provided, an example of correct syntax is "F008". By default, reads all FOVs. Default: None.
  • read_proteins (bool): Whether to read the proteins or the transcripts. Default: False.
  • image_models_kwargs (dict | None): Keyword arguments passed to spatialdata.models.Image2DModel. Default: None.
  • imread_kwargs (dict | None): Keyword arguments passed to dask_image.imread.imread. Default: None.

Returns:

  • SpatialData: A SpatialData object representing the CosMx experiment.

Source code in sopa/io/reader/cosmx.py
def cosmx(
    path: str | Path,
    dataset_id: Optional[str] = None,
    fov: int | str | None = None,
    read_proteins: bool = False,
    image_models_kwargs: dict | None = None,
    imread_kwargs: dict | None = None,
) -> SpatialData:
    """
    Read *CosMx Nanostring* data. The fields of view are stitched together, unless `fov` is provided.

    This function reads the following files:
        - `*_fov_positions_file.csv` or `*_fov_positions_file.csv.gz`: FOV locations
        - `Morphology2D` directory: all the FOVs morphology images
        - `Morphology_ChannelID_Dictionary.txt`: Morphology channels names
        - `*_tx_file.csv.gz` or `*_tx_file.csv`: Transcripts location and names
        - If `read_proteins` is `True`, all the images under the nested `ProteinImages` directories will be read

    Args:
        path: Path to the root directory containing *Nanostring* files.
        dataset_id: Optional name of the dataset (needs to be provided if it cannot be inferred).
        fov: Name or number of one single field of view to be read. If a string is provided, an example of correct syntax is "F008". By default, reads all FOVs.
        read_proteins: Whether to read the proteins or the transcripts.
        image_models_kwargs: Keyword arguments passed to `spatialdata.models.Image2DModel`.
        imread_kwargs: Keyword arguments passed to `dask_image.imread.imread`.

    Returns:
        A `SpatialData` object representing the CosMX experiment
    """
    path = Path(path)
    image_models_kwargs, imread_kwargs = _default_image_kwargs(image_models_kwargs, imread_kwargs)

    dataset_id = _infer_dataset_id(path, dataset_id)
    fov_locs = _read_fov_locs(path, dataset_id)
    fov_id, fov = _check_fov_id(fov)

    protein_dir_dict = {}
    if read_proteins:
        protein_dir_dict = {
            int(protein_dir.parent.name[3:]): protein_dir
            for protein_dir in list(path.rglob("**/FOV*/ProteinImages"))
        }
        assert len(protein_dir_dict), f"No directory called 'ProteinImages' was found under {path}"

    ### Read image(s)
    images_dir = _find_dir(path, "Morphology2D")
    morphology_coords = _cosmx_morphology_coords(path)

    if fov is None:
        image, c_coords = _read_stitched_image(
            images_dir,
            fov_locs,
            protein_dir_dict,
            morphology_coords,
            **imread_kwargs,
        )
        image_name = "stitched_image"
    else:
        pattern = f"*{fov_id}.TIF"
        fov_files = list(images_dir.rglob(pattern))

        assert len(fov_files), f"No file matches the pattern {pattern} inside {images_dir}"
    assert (
        len(fov_files) == 1
    ), f"Multiple files match the pattern {pattern}: {', '.join(map(str, fov_files))}"

        image, c_coords = _read_fov_image(
            fov_files[0], protein_dir_dict.get(fov), morphology_coords, **imread_kwargs
        )
        image_name = f"{fov}_image"

    parsed_image = Image2DModel.parse(
        image, dims=("c", "y", "x"), c_coords=c_coords, **image_models_kwargs
    )

    if read_proteins:
        return SpatialData(images={image_name: parsed_image})

    ### Read transcripts
    transcripts_data = _read_transcripts_csv(path, dataset_id)

    if fov is None:
        transcripts_data["x"] = transcripts_data["x_global_px"] - fov_locs["xmin"].min()
        transcripts_data["y"] = transcripts_data["y_global_px"] - fov_locs["ymin"].min()
        coordinates = None
        points_name = "points"
    else:
        transcripts_data = transcripts_data[transcripts_data["fov"] == fov]
        coordinates = {"x": "x_local_px", "y": "y_local_px"}
        points_name = f"{fov}_points"

    transcripts = PointsModel.parse(
        transcripts_data,
        coordinates=coordinates,
        feature_key=CosmxKeys.TARGET_OF_TRANSCRIPT,
    )

    return SpatialData(images={image_name: parsed_image}, points={points_name: transcripts})
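
A minimal usage sketch (the path and the FOV name are illustrative):

import sopa.io

# read a single field of view instead of stitching all of them
sdata = sopa.io.cosmx("/path/to/cosmx_directory", fov="F008")  # hypothetical path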

sopa.io.macsima(path, **kwargs)

Read MACSIMA data as a SpatialData object

Notes

For duplicated channel names, the channel index is appended in parentheses; for instance, you will often find DAPI (000) to indicate the DAPI channel of index 000.

Parameters:

  • path (Path): Path to the directory containing the MACSIMA .tif images. Required.
  • kwargs: Keyword arguments passed to _general_tif_directory_reader. Default: {}.

Returns:

  • SpatialData: A SpatialData object with a 2D-image of shape (C, Y, X).

Source code in sopa/io/reader/macsima.py
def macsima(path: Path, **kwargs: int) -> SpatialData:
    """Read MACSIMA data as a `SpatialData` object

    Notes:
        For duplicated channel names, the channel index is appended in parentheses; for instance, you will often find `DAPI (000)` to indicate the DAPI channel of index `000`

    Args:
        path: Path to the directory containing the MACSIMA `.tif` images
        kwargs: Kwargs for `_general_tif_directory_reader`

    Returns:
        A `SpatialData` object with a 2D-image of shape `(C, Y, X)`
    """
    return _general_tif_directory_reader(
        path, files_to_channels=_get_channel_names_macsima, **kwargs
    )

sopa.io.phenocycler(path, channels_renaming=None, image_models_kwargs=None)

Read Phenocycler data as a SpatialData object

Parameters:

  • path (str | Path): Path to a .qptiff file, or a .tif file (if exported from QuPath). Required.
  • channels_renaming (dict | None): A dictionary whose keys correspond to channels and values to their corresponding new name. Not all channels need to be renamed. Default: None.
  • image_models_kwargs (dict | None): Keyword arguments passed to spatialdata.models.Image2DModel. Default: None.

Returns:

  • SpatialData: A SpatialData object with a 2D-image of shape (C, Y, X).

Source code in sopa/io/reader/phenocycler.py
def phenocycler(
    path: str | Path, channels_renaming: dict | None = None, image_models_kwargs: dict | None = None
) -> SpatialData:
    """Read Phenocycler data as a `SpatialData` object

    Args:
        path: Path to a `.qptiff` file, or a `.tif` file (if exported from QuPath)
        channels_renaming: A dictionary whose keys correspond to channels and values to their corresponding new name. Not all channels need to be renamed.
        image_models_kwargs: Keyword arguments passed to `spatialdata.models.Image2DModel`.

    Returns:
        A `SpatialData` object with a 2D-image of shape `(C, Y, X)`
    """
    image_models_kwargs, _ = _default_image_kwargs(image_models_kwargs)

    path = Path(path)
    image_name = path.absolute().stem

    if path.suffix == ".qptiff":
        with tf.TiffFile(path) as tif:
            series = tif.series[0]
            names = _get_channel_names_qptiff(series)

            delayed_image = delayed(lambda series: series.asarray())(tif)
            image = da.from_delayed(delayed_image, dtype=series.dtype, shape=series.shape)
    elif path.suffix == ".tif":
        image = imread(path)
        names = _get_IJ_channel_names(path)
    else:
        raise ValueError(f"Unsupported file extension {path.suffix}. Must be '.qptiff' or '.tif'.")

    names = _rename_channels(names, channels_renaming)
    image = image.rechunk(chunks=image_models_kwargs["chunks"])

    image = Image2DModel.parse(
        image,
        dims=("c", "y", "x"),
        transformations={"pixels": Identity()},
        c_coords=names,
        **image_models_kwargs,
    )

    return SpatialData(images={image_name: image})
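
A minimal usage sketch (the path and the channel names are hypothetical):

import sopa.io

sdata = sopa.io.phenocycler(
    "/path/to/image.qptiff",  # hypothetical path
    channels_renaming={"DAPI-01": "DAPI"},  # hypothetical old/new channel names
)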

sopa.io.hyperion(path, image_models_kwargs=None, imread_kwargs=None)

Read Hyperion data as a SpatialData object

Parameters:

  • path (Path): Path to the directory containing the Hyperion .tiff images. Required.
  • image_models_kwargs (dict | None): Keyword arguments passed to spatialdata.models.Image2DModel. Default: None.
  • imread_kwargs (dict | None): Keyword arguments passed to dask_image.imread.imread. Default: None.

Returns:

  • SpatialData: A SpatialData object with a 2D-image of shape (C, Y, X).

Source code in sopa/io/reader/hyperion.py
def hyperion(
    path: Path, image_models_kwargs: dict | None = None, imread_kwargs: dict | None = None
) -> SpatialData:
    """Read Hyperion data as a `SpatialData` object

    Args:
        path: Path to the directory containing the Hyperion `.tiff` images
        image_models_kwargs: Keyword arguments passed to `spatialdata.models.Image2DModel`.
        imread_kwargs: Keyword arguments passed to `dask_image.imread.imread`.

    Returns:
        A `SpatialData` object with a 2D-image of shape `(C, Y, X)`
    """
    image_models_kwargs, imread_kwargs = _default_image_kwargs(image_models_kwargs, imread_kwargs)

    files = [file for file in Path(path).iterdir() if file.suffix == ".tiff"]

    names = _get_channel_names_hyperion(files)
    image = da.concatenate(
        [imread(file, **imread_kwargs) for file in files],
        axis=0,
    )
    image = (image / image.max(axis=(1, 2)).compute()[:, None, None] * 255).astype(np.uint8)
    image = image.rechunk(chunks=image_models_kwargs["chunks"])

    log.info(f"Found channel names {names}")

    image_name = Path(path).absolute().stem
    image = Image2DModel.parse(
        image,
        dims=("c", "y", "x"),
        transformations={"pixels": Identity()},
        c_coords=names,
        **image_models_kwargs,
    )

    return SpatialData(images={image_name: image})
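
A minimal usage sketch (the directory path is hypothetical):

import sopa.io

# reads all the .tiff channel images found in the directory
sdata = sopa.io.hyperion("/path/to/hyperion_directory")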

sopa.io.aicsimageio(path, z_stack=0, image_models_kwargs=None, aics_kwargs=None)

Read an image using AICSImageIO. It supports special formats such as ND2, CZI, LIF, or DV.

Extra dependencies

To use this reader, you'll need the aicsimageio dependency (pip install aicsimageio). To read .czi images, you'll also need to install aicspylibczi (for instance pip install aicspylibczi).

Parameters:

  • path (Path): Path to the image file. Required.
  • z_stack (int): (Only for 3D images) Index of the stack in the z-axis to use. Default: 0.
  • image_models_kwargs (dict | None): Keyword arguments passed to spatialdata.models.Image2DModel. Default: None.
  • aics_kwargs (dict | None): Keyword arguments passed to aicsimageio.AICSImage. Default: None.

Returns:

  • SpatialData: A SpatialData object with a 2D-image of shape (C, Y, X).

Source code in sopa/io/reader/aics.py
def aicsimageio(
    path: Path,
    z_stack: int = 0,
    image_models_kwargs: dict | None = None,
    aics_kwargs: dict | None = None,
) -> SpatialData:
    """Read an image using [AICSImageIO](https://github.com/AllenCellModeling/aicsimageio). It supports special formats such as `ND2`, `CZI`, `LIF`, or `DV`.

    !!! note "Extra dependencies"
        To use this reader, you'll need the `aicsimageio` dependency (`pip install aicsimageio`). To read `.czi` images, you'll also need to install `aicspylibczi` (for instance `pip install aicspylibczi`).

    Args:
        path: Path to the image file
        z_stack: (Only for 3D images) Index of the stack in the z-axis to use.
        image_models_kwargs: Keyword arguments passed to `spatialdata.models.Image2DModel`.
        aics_kwargs: Keyword arguments passed to `aicsimageio.AICSImage`.

    Returns:
        A `SpatialData` object with a 2D-image of shape `(C, Y, X)`
    """
    image_models_kwargs, _ = _default_image_kwargs(image_models_kwargs, None)
    aics_kwargs = {} if aics_kwargs is None else aics_kwargs

    try:
        from aicsimageio import AICSImage
    except ImportError:
        raise ImportError(
            "You need to install aicsimageio, e.g. by running `pip install aicsimageio`"
        )

    xarr: xr.DataArray = AICSImage(path, **aics_kwargs).xarray_dask_data

    assert (
        len(xarr.coords["T"]) == 1
    ), f"Only one time dimension is supported, found {len(xarr.coords['T'])}."

    if len(xarr.coords["Z"]) > 1:
        log.info(f"3D image found, only reading {z_stack:=}")

    xarr = xarr.isel(T=0, Z=z_stack).rename({"C": "c", "Y": "y", "X": "x"})

    image = Image2DModel.parse(xarr, c_coords=xarr.coords["c"].values, **image_models_kwargs)

    return SpatialData(images={"image": image})

sopa.io.ome_tif(path, as_image=False)

Read an .ome.tif image. This image should be a 2D image (with possibly multiple channels). Typically, this function can be used to open Xenium IF images.

Parameters:

  • path (Path): Path to the .ome.tif image. Required.
  • as_image (bool): If True, will return a SpatialImage object. Default: False.

Returns:

  • SpatialImage | SpatialData: A SpatialImage or a SpatialData object.

Source code in sopa/io/reader/utils.py
def ome_tif(path: Path, as_image: bool = False) -> SpatialImage | SpatialData:
    """Read an `.ome.tif` image. This image should be a 2D image (with possibly multiple channels).
    Typically, this function can be used to open Xenium IF images.

    Args:
        path: Path to the `.ome.tif` image
        as_image: If `True`, will return a `SpatialImage` object

    Returns:
        A `SpatialImage` or a `SpatialData` object
    """
    image_models_kwargs, _ = _default_image_kwargs()
    image_name = Path(path).absolute().name.split(".")[0]
    image: da.Array = imread(path)

    if image.ndim == 4:
        assert image.shape[0] == 1, "4D images not supported"
        image = da.moveaxis(image[0], 2, 0)
        log.info(f"Transformed 4D image into a 3D image of shape (c, y, x) = {image.shape}")
    elif image.ndim != 3:
        raise ValueError(f"Number of dimensions not supported: {image.ndim}")

    image = image.rechunk(chunks=image_models_kwargs["chunks"])

    channel_names = _ome_channels_names(path)
    if len(channel_names) != len(image):
        channel_names = [str(i) for i in range(len(image))]
        log.warn(f"Channel names couldn't be read. Using {channel_names} instead.")

    image = SpatialImage(image, dims=["c", "y", "x"], name=image_name, coords={"c": channel_names})

    if as_image:
        return image

    image = Image2DModel.parse(
        image,
        dims=("c", "y", "x"),
        c_coords=channel_names,
        transformations={"pixels": Identity()},
        **image_models_kwargs,
    )

    return SpatialData(images={image_name: image})
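
A minimal usage sketch (the path is hypothetical):

import sopa.io

# as_image=True returns a SpatialImage with dims (c, y, x) instead of a SpatialData object
image = sopa.io.ome_tif("/path/to/image.ome.tif", as_image=True)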

sopa.io.wsi(path, chunks=(3, 256, 256), as_image=False, backend='tiffslide')

Read a WSI into a SpatialData object

Parameters:

  • path (str | Path): Path to the WSI. Required.
  • chunks (tuple[int, int, int]): Tuple representing the chunksize for the dimensions (C, Y, X). Default: (3, 256, 256).
  • as_image (bool): If True, returns an image instead of a SpatialData object. Default: False.
  • backend (str): The library to use as a backend in order to load the WSI. One of: "openslide", "tiffslide". Default: 'tiffslide'.

Returns:

  • SpatialData: A SpatialData object with a multiscale 2D-image of shape (C, Y, X).

Source code in sopa/io/reader/wsi.py
def wsi(
    path: str | Path,
    chunks: tuple[int, int, int] = (3, 256, 256),
    as_image: bool = False,
    backend: str = "tiffslide",
) -> SpatialData:
    """Read a WSI into a `SpatialData` object

    Args:
        path: Path to the WSI
        chunks: Tuple representing the chunksize for the dimensions `(C, Y, X)`.
        as_image: If `True`, returns an image instead of a `SpatialData` object
        backend: The library to use as a backend in order to load the WSI. One of: `"openslide"`, `"tiffslide"`.

    Returns:
        A `SpatialData` object with a multiscale 2D-image of shape `(C, Y, X)`
    """
    image_name, img, slide, slide_metadata = _open_wsi(path, backend=backend)

    images = {}
    for level, key in enumerate(list(img.keys())):
        suffix = key if key != "0" else ""

        scale_image = SpatialImage(
            img[key].transpose("S", f"Y{suffix}", f"X{suffix}"),
            dims=("c", "y", "x"),
        ).chunk(chunks)

        scale_factor = slide.level_downsamples[level]

        scale_image = Image2DModel.parse(
            scale_image[:3, :, :],
            transformations={"pixels": _get_scale_transformation(scale_factor)},
            c_coords=("r", "g", "b"),
        )
        scale_image.coords["y"] = scale_factor * scale_image.coords["y"]
        scale_image.coords["x"] = scale_factor * scale_image.coords["x"]

        images[f"scale{key}"] = scale_image

    multiscale_image = MultiscaleSpatialImage.from_dict(images)
    sdata = SpatialData(images={image_name: multiscale_image})
    sdata[image_name].attrs["metadata"] = slide_metadata
    sdata[image_name].attrs["backend"] = backend
    sdata[image_name].name = image_name

    if as_image:
        return multiscale_image

    return sdata
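
A minimal usage sketch (the slide path is hypothetical; the chosen backend library must be installed):

import sopa.io

sdata = sopa.io.wsi("/path/to/slide.svs", backend="tiffslide")  # multiscale RGB image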

sopa.io.uniform(*_, length=2048, cell_density=0.0001, n_points_per_cell=100, c_coords=['DAPI', 'CK', 'CD3', 'CD20'], genes=['EPCAM', 'CD3E', 'CD20', 'CXCL4', 'CXCL10'], sigma_factor=0.05, pixel_size=0.1, seed=0, include_vertices=False, include_image=True, apply_blur=True, as_output=False)

Generate a dummy dataset composed of cells generated uniformly in a square. It also has transcripts.

Parameters:

  • length (int): Size of the square, in pixels. Default: 2048.
  • cell_density (float): Density of cells per pixel^2. Default: 0.0001.
  • n_points_per_cell (int): Mean number of transcripts per cell. Default: 100.
  • c_coords (list[str]): Channel names. Default: ['DAPI', 'CK', 'CD3', 'CD20'].
  • genes (int | list[str]): Number of different genes, or list of gene names. Default: ['EPCAM', 'CD3E', 'CD20', 'CXCL4', 'CXCL10'].
  • sigma_factor (float): Factor used to determine sigma for the gaussian blur. Default: 0.05.
  • pixel_size (float): Number of microns in one pixel. Default: 0.1.
  • seed (int): Numpy random seed. Default: 0.
  • include_vertices (bool): Whether to include the vertices of the cells (as points) in the spatialdata object. Default: False.
  • include_image (bool): Whether to include the image in the spatialdata object. Default: True.
  • apply_blur (bool): Whether to apply gaussian blur on the image (without blur, cells are just one pixel). Default: True.
  • as_output (bool): If True, the data will have the same format as an output of Sopa. Default: False.

Returns:

  • SpatialData: A SpatialData object with a 2D image (sdata["image"]), the cell polygon boundaries (sdata["cells"]), the transcripts (sdata["transcripts"]), and optional cell vertices (sdata["vertices"]) if include_vertices is True.

Source code in sopa/utils/data.py
def uniform(
    *_,
    length: int = 2_048,
    cell_density: float = 1e-4,
    n_points_per_cell: int = 100,
    c_coords: list[str] = ["DAPI", "CK", "CD3", "CD20"],
    genes: int | list[str] = ["EPCAM", "CD3E", "CD20", "CXCL4", "CXCL10"],
    sigma_factor: float = 0.05,
    pixel_size: float = 0.1,
    seed: int = 0,
    include_vertices: bool = False,
    include_image: bool = True,
    apply_blur: bool = True,
    as_output: bool = False,
) -> SpatialData:
    """Generate a dummy dataset composed of cells generated uniformly in a square. It also has transcripts.

    Args:
        length: Size of the square, in pixels
        cell_density: Density of cells per pixel^2
        n_points_per_cell: Mean number of transcripts per cell
        c_coords: Channel names
        genes: Number of different genes, or list of gene names
        sigma_factor: Factor used to determine `sigma` for the gaussian blur.
        pixel_size: Number of microns in one pixel.
        seed: Numpy random seed
        include_vertices: Whether to include the vertices of the cells (as points) in the spatialdata object
        include_image: Whether to include the image in the spatialdata object
        apply_blur: Whether to apply gaussian blur on the image (without blur, cells are just one pixel)
        as_output: If `True`, the data will have the same format as an output of Sopa

    Returns:
        A SpatialData object with a 2D image (`sdata["image"]`), the cell polygon boundaries (`sdata["cells"]`), the transcripts (`sdata["transcripts"]`), and optional cell vertices (`sdata["vertices"]`) if `include_vertices` is `True`.
    """
    np.random.seed(seed)

    grid_width = max(1, int(length * np.sqrt(cell_density)))
    dx = length / grid_width
    sigma = dx * sigma_factor
    n_cells = grid_width**2
    radius = int(dx) // 4
    cell_types_index = np.random.randint(0, max(1, len(c_coords) - 1), n_cells)

    log.info(
        f"Image of size {(len(c_coords), length, length)} with {n_cells} cells and {n_points_per_cell} transcripts per cell"
    )

    ### Compute cell vertices (xy array)
    vertices_x = dx / 2 + np.arange(grid_width) * dx
    x, y = np.meshgrid(vertices_x, vertices_x)
    xy = np.stack([x.ravel(), y.ravel()], axis=1)
    xy += np.random.uniform(-dx / 2, dx / 2, size=xy.shape)
    xy = xy.clip(0, length - 1).astype(int)

    vertices = pd.DataFrame(xy, columns=["x", "y"])

    ### Create image
    images = {}

    if include_image:
        x_circle, y_circle = circle_coords(radius)

        image = np.zeros((len(c_coords), length, length))
        for i, (x, y) in enumerate(xy):
            y_coords = (y + y_circle).clip(0, image.shape[1] - 1)
            x_coords = (x + x_circle).clip(0, image.shape[2] - 1)
            image[0, y_coords, x_coords] = 1
            if len(c_coords) > 1:
                image[cell_types_index[i] + 1, y_coords, x_coords] = 1
        if apply_blur:
            image = gaussian_filter(image, sigma=sigma, axes=(1, 2))
        image = (image / image.max() * 255).astype(np.uint8)
        image = da.from_array(image, chunks=(1, 1024, 1024))
        images["image"] = Image2DModel.parse(image, c_coords=c_coords, dims=["c", "y", "x"])

    ### Create cell boundaries
    cells = [Point(vertex).buffer(radius).simplify(tolerance=1) for vertex in xy]
    bbox = box(0, 0, length - 1, length - 1)
    cells = [cell.intersection(bbox) for cell in cells]
    gdf = gpd.GeoDataFrame(geometry=cells)
    shapes = {"cellpose_boundaries" if as_output else "cells": ShapesModel.parse(gdf)}

    ### Create transcripts
    n_genes = n_cells * n_points_per_cell
    point_cell_index = np.arange(n_cells).repeat(n_points_per_cell)
    points_coords = radius / 2 * np.random.randn(n_genes, 2) + xy[point_cell_index]
    points_coords = points_coords.clip(0, length - 1)

    if isinstance(genes, int):
        gene_names = np.random.choice([chr(97 + i) for i in range(genes)], size=n_genes)
    elif len(genes) and len(genes) >= len(c_coords) - 1:
        gene_names = np.full(n_genes, "", dtype="<U5")
        for i in range(len(genes)):
            where_cell_type = np.where(cell_types_index[point_cell_index] == i)[0]
            probabilities = np.full(len(genes), 0.2 / (len(genes) - 1))
            probabilities[i] = 0.8
            gene_names[where_cell_type] = np.random.choice(
                genes, len(where_cell_type), p=probabilities
            )
    else:
        gene_names = np.random.choice(genes, size=n_genes)

    df = pd.DataFrame(
        {
            "x": points_coords[:, 0],
            "y": points_coords[:, 1],
            "z": 1,
            "genes": gene_names,
        }
    )

    # apply an arbitrary transformation for a more complete test case
    affine = np.array([[pixel_size, 0, 100], [0, pixel_size, 600], [0, 0, 1]])
    df[["x", "y", "z"]] = df[["x", "y", "z"]] @ affine.T
    affine = Affine(affine, input_axes=["x", "y"], output_axes=["x", "y"]).inverse()

    df = dd.from_pandas(df, chunksize=2_000_000)

    points = {
        "transcripts": PointsModel.parse(
            df, transformations={"global": affine, "microns": Identity()}
        )
    }
    if include_vertices:
        points["vertices"] = PointsModel.parse(vertices)

    sdata = SpatialData(images=images, points=points, shapes=shapes)

    if as_output:
        _add_table(sdata)

    return sdata
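
A minimal usage sketch of this dummy dataset:

import sopa.io

sdata = sopa.io.uniform(length=1024)
print(sdata)  # contains sdata["image"], sdata["cells"], and sdata["transcripts"]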

sopa.io.blobs(*_, length=1024, n_points=10000, c_coords=['DAPI', 'CK', 'CD3', 'CD20'], **kwargs)

Adapts the blobs dataset from SpatialData for sopa. Please refer to the SpatialData documentation

Source code in sopa/utils/data.py
def blobs(
    *_,
    length: int = 1_024,
    n_points: int = 10_000,
    c_coords=["DAPI", "CK", "CD3", "CD20"],
    **kwargs,
) -> SpatialData:
    """Adapts the blobs dataset from SpatialData for sopa. Please refer to the SpatialData documentation"""
    _blobs = BlobsDataset(
        length=length, n_points=n_points, c_coords=c_coords, n_channels=len(c_coords), **kwargs
    )

    image = _blobs._image_blobs(
        _blobs.transformations,
        _blobs.length,
        _blobs.n_channels,
        _blobs.c_coords,
    )
    image.data = (image.data * 255).astype(np.uint8)

    points = _blobs._points_blobs(_blobs.transformations, _blobs.length, _blobs.n_points)
    genes = pd.Series(np.random.choice(list("abcdef"), size=len(points))).astype("category")
    points["genes"] = dd.from_pandas(genes, npartitions=points.npartitions)

    return SpatialData(images={"blob_image": image}, points={"blob_transcripts": points})

Xenium Explorer

sopa.io.write(path, sdata, image_key=None, shapes_key=None, points_key=None, gene_column=None, pixel_size=0.2125, layer=None, polygon_max_vertices=13, lazy=True, ram_threshold_gb=4, mode=None, save_h5ad=False)

Transform a SpatialData object into inputs for the Xenium Explorer. After running this function, double-click on the experiment.xenium file to open it.

Software download

Make sure you have the latest version of the Xenium Explorer

Note

This function will create up to 7 files, depending on the SpatialData object and the arguments:

  • experiment.xenium contains some experiment metadata. Double-click on this file to open the Xenium Explorer. This file can also be created with write_metadata.

  • morphology.ome.tif is the primary image. This file can also be created with write_image. Add more images with align.

  • analysis.zarr.zip contains the cell categories (or clusters), i.e. adata.obs. This file can also be created with write_cell_categories.

  • cell_feature_matrix.zarr.zip contains the cell-by-gene counts. This file can also be created with write_gene_counts.

  • cells.zarr.zip contains the cell polygon boundaries. This file can also be created with write_polygons.

  • transcripts.zarr.zip contains transcripts locations. This file can also be created with write_transcripts.

  • adata.h5ad is the AnnData object from the SpatialData. This is not used by the Explorer, but only saved for convenience.

Parameters:

  • path (str): Path to the directory where files will be saved. Required.
  • sdata (SpatialData): SpatialData object. Required.
  • image_key (str | None): Name of the image of interest (key of sdata.images). Default: None.
  • shapes_key (str | None): Name of the cell shapes (key of sdata.shapes). Default: None.
  • points_key (str | None): Name of the transcripts (key of sdata.points). Default: None.
  • gene_column (str | None): Column name of the points dataframe containing the gene names. Default: None.
  • pixel_size (float): Number of microns in a pixel. An invalid value can lead to inconsistent scales in the Explorer. Default: 0.2125.
  • layer (str | None): Layer of the AnnData table where the gene counts are saved. If None, uses table.X. Default: None.
  • polygon_max_vertices (int): Maximum number of vertices for the cell polygons. Default: 13.
  • lazy (bool): If True, will not load the full images in memory (except if the image memory is below ram_threshold_gb). Default: True.
  • ram_threshold_gb (int | None): Threshold (in gigabytes) below which an image is loaded in memory. If None, the image is never loaded in memory. Default: 4.
  • mode (str): String indicating which files should be created. "-ib" means everything except images and boundaries, while "+tocm" means only transcripts/observations/counts/metadata (each letter corresponds to one explorer file). By default, keeps everything. Default: None.
  • save_h5ad (bool): Whether to save the adata as h5ad in the explorer directory (for convenience only, since h5ad is faster to open than the original .zarr table). Default: False.
Source code in sopa/io/explorer/converter.py
def write(
    path: str,
    sdata: SpatialData,
    image_key: str | None = None,
    shapes_key: str | None = None,
    points_key: str | None = None,
    gene_column: str | None = None,
    pixel_size: float = 0.2125,
    layer: str | None = None,
    polygon_max_vertices: int = 13,
    lazy: bool = True,
    ram_threshold_gb: int | None = 4,
    mode: str = None,
    save_h5ad: bool = False,
) -> None:
    """
    Transform a SpatialData object into inputs for the Xenium Explorer.
    After running this function, double-click on the `experiment.xenium` file to open it.

    !!! note "Software download"
        Make sure you have the latest version of the [Xenium Explorer](https://www.10xgenomics.com/support/software/xenium-explorer)

    Note:
        This function will create up to 7 files, depending on the `SpatialData` object and the arguments:

        - `experiment.xenium` contains some experiment metadata. Double-click on this file to open the Xenium Explorer. This file can also be created with [`write_metadata`](./#sopa.io.explorer.write_metadata).

        - `morphology.ome.tif` is the primary image. This file can also be created with [`write_image`](./#sopa.io.explorer.write_image). Add more images with `align`.

        - `analysis.zarr.zip` contains the cell categories (or clusters), i.e. `adata.obs`. This file can also be created with [`write_cell_categories`](./#sopa.io.explorer.write_cell_categories).

        - `cell_feature_matrix.zarr.zip` contains the cell-by-gene counts. This file can also be created with [`write_gene_counts`](./#sopa.io.explorer.write_gene_counts).

        - `cells.zarr.zip` contains the cell polygon boundaries. This file can also be created with [`write_polygons`](./#sopa.io.explorer.write_polygons).

        - `transcripts.zarr.zip` contains transcripts locations. This file can also be created with [`write_transcripts`](./#sopa.io.explorer.write_transcripts).

        - `adata.h5ad` is the `AnnData` object from the `SpatialData`. This is **not** used by the Explorer, but only saved for convenience.

    Args:
        path: Path to the directory where files will be saved.
        sdata: SpatialData object.
        image_key: Name of the image of interest (key of `sdata.images`).
        shapes_key: Name of the cell shapes (key of `sdata.shapes`).
        points_key: Name of the transcripts (key of `sdata.points`).
        gene_column: Column name of the points dataframe containing the gene names.
        pixel_size: Number of microns in a pixel. Invalid value can lead to inconsistent scales in the Explorer.
        layer: Layer of the AnnData table where the gene counts are saved. If `None`, uses `table.X`.
        polygon_max_vertices: Maximum number of vertices for the cell polygons.
        lazy: If `True`, will not load the full images in memory (except if the image memory is below `ram_threshold_gb`).
        ram_threshold_gb: Threshold (in gigabytes) below which an image is loaded in memory. If `None`, the image is never loaded in memory.
        mode: String indicating which files should be created. "-ib" means everything except images and boundaries, while "+tocm" means only transcripts/observations/counts/metadata (each letter corresponds to one explorer file). By default, keeps everything.
        save_h5ad: Whether to save the adata as h5ad in the explorer directory (for convenience only, since h5ad is faster to open than the original .zarr table)
    """
    path: Path = Path(path)
    _check_explorer_directory(path)

    image_key, image = get_spatial_image(sdata, image_key, return_key=True)

    ### Saving cell categories and gene counts
    if SopaKeys.TABLE in sdata.tables:
        adata = sdata.tables[SopaKeys.TABLE]

        shapes_key = adata.uns["spatialdata_attrs"]["region"]
        geo_df = sdata[shapes_key]

        if _should_save(mode, "c"):
            write_gene_counts(path, adata, layer=layer)
        if _should_save(mode, "o"):
            write_cell_categories(path, adata)

    ### Saving cell boundaries
    if shapes_key is None:
        shapes_key, geo_df = get_boundaries(sdata, return_key=True, warn=True)
    else:
        geo_df = sdata[shapes_key]

    if _should_save(mode, "b") and geo_df is not None:
        geo_df = to_intrinsic(sdata, geo_df, image_key)

        if SopaKeys.TABLE in sdata.tables:
            geo_df = geo_df.loc[adata.obs[adata.uns["spatialdata_attrs"]["instance_key"]]]

        write_polygons(path, geo_df.geometry, polygon_max_vertices, pixel_size=pixel_size)

    ### Saving transcripts
    df = get_element(sdata, "points", points_key)

    if _should_save(mode, "t") and df is not None:
        if gene_column is not None:
            df = to_intrinsic(sdata, df, image_key)
            write_transcripts(path, df, gene_column, pixel_size=pixel_size)
        else:
            log.warn("The argument 'gene_column' has to be provided to save the transcripts")

    ### Saving image
    if _should_save(mode, "i"):
        write_image(
            path,
            sdata[image_key],
            lazy=lazy,
            ram_threshold_gb=ram_threshold_gb,
            pixel_size=pixel_size,
        )

    ### Saving experiment.xenium file
    if _should_save(mode, "m"):
        write_metadata(path, image_key, shapes_key, _get_n_obs(sdata, geo_df), pixel_size)

    if save_h5ad and SopaKeys.TABLE in sdata.tables:
        sdata.tables[SopaKeys.TABLE].write_h5ad(path / FileNames.H5AD)

    log.info(f"Saved files in the following directory: {path}")
    log.info(f"You can open the experiment with 'open {path / FileNames.METADATA}'")

sopa.io.align(sdata, image, transformation_matrix_path, image_key=None, overwrite=False)

Add an image to the SpatialData object after alignment with the Xenium Explorer.

Parameters:

  • sdata (SpatialData): A SpatialData object. Required.
  • image (SpatialImage): A SpatialImage object. Note that image.name is used as the key for the aligned image. Required.
  • transformation_matrix_path (str): Path to the .csv transformation matrix exported from the Xenium Explorer. Required.
  • image_key (str): Optional name of the image on which it has been aligned. Required if there are multiple images in the SpatialData object. Default: None.
  • overwrite (bool): Whether to overwrite the image, if already existing. Default: False.
Source code in sopa/io/explorer/images.py
def align(
    sdata: SpatialData,
    image: SpatialImage,
    transformation_matrix_path: str,
    image_key: str = None,
    overwrite: bool = False,
):
    """Add an image to the `SpatialData` object after alignment with the Xenium Explorer.

    Args:
        sdata: A `SpatialData` object
        image: A `SpatialImage` object. Note that `image.name` is used as the key for the aligned image.
        transformation_matrix_path: Path to the `.csv` transformation matrix exported from the Xenium Explorer
        image_key: Optional name of the image on which it has been aligned. Required if multiple images in the `SpatialData` object.
        overwrite: Whether to overwrite the image, if already existing.
    """
    image_name = image.name

    to_pixel = Affine(
        np.genfromtxt(transformation_matrix_path, delimiter=","),
        input_axes=("x", "y"),
        output_axes=("x", "y"),
    )

    default_image = get_spatial_image(sdata, image_key)
    pixel_cs = get_intrinsic_cs(sdata, default_image)

    set_transformation(image, {pixel_cs: to_pixel}, set_all=True)

    log.info(f"Adding image {image_name}:\n{image}")
    sdata.images[image_name] = image
    save_image(sdata, image_name, overwrite=overwrite)

sopa.io.add_explorer_selection(sdata, path, shapes_key, image_key=None, pixel_size=0.2125)

After saving a selection in the Xenium Explorer, this function adds all the selected polygons to sdata.shapes[shapes_key].

Parameters:

  • sdata (SpatialData): A SpatialData object. Required.
  • path (str): The path to the coordinates.csv selection file. Required.
  • shapes_key (str): The name to provide to the shapes. Required.
  • image_key (str | None): The original image name. Default: None.
  • pixel_size (float): Number of microns in a pixel. It must be the same value as the one used in sopa.io.write. Default: 0.2125.
Source code in sopa/io/explorer/utils.py
def add_explorer_selection(
    sdata: SpatialData,
    path: str,
    shapes_key: str,
    image_key: str | None = None,
    pixel_size: float = 0.2125,
):
    """After saving a selection on the Xenium Explorer, it will add all polygons inside `sdata.shapes[shapes_key]`

    Args:
        sdata: A `SpatialData` object
        path: The path to the `coordinates.csv` selection file
        shapes_key: The name to provide to the shapes
        image_key: The original image name
        pixel_size: Number of microns in a pixel. It must be the same value as the one used in `sopa.io.write`
    """
    polys = xenium_explorer_selection(path, pixel_size=pixel_size, return_list=True)
    image = get_element(sdata, "images", image_key)

    transformations = get_transformation(image, get_all=True).copy()

    sdata.shapes[shapes_key] = ShapesModel.parse(
        gpd.GeoDataFrame(geometry=polys), transformations=transformations
    )
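
A minimal usage sketch (the CSV path and the shapes key are illustrative, and sdata is an existing SpatialData object):

import sopa.io

sopa.io.add_explorer_selection(sdata, "/path/to/coordinates.csv", "my_selection")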

sopa.io.int_cell_id(explorer_cell_id)

Transforms an alphabetical cell id from the Xenium Explorer to an integer ID

E.g., int_cell_id('aaaachba-1') = 10000

Source code in sopa/io/explorer/utils.py
def int_cell_id(explorer_cell_id: str) -> int:
    """Transforms an alphabetical cell id from the Xenium Explorer to an integer ID

    E.g., int_cell_id('aaaachba-1') = 10000"""
    code = explorer_cell_id[:-2] if explorer_cell_id[-2] == "-" else explorer_cell_id
    coefs = [ord(c) - 97 for c in code][::-1]
    return sum(value * 16**i for i, value in enumerate(coefs))

sopa.io.str_cell_id(cell_id)

Transforms an integer cell ID into an Xenium Explorer alphabetical cell id

E.g., str_cell_id(10000) = 'aaaachba-1'

Source code in sopa/io/explorer/utils.py
def str_cell_id(cell_id: int) -> str:
    """Transforms an integer cell ID into an Xenium Explorer alphabetical cell id

    E.g., str_cell_id(10000) = 'aaaachba-1'"""
    coefs = []
    for _ in range(8):
        cell_id, coef = divmod(cell_id, 16)
        coefs.append(coef)
    return "".join([chr(97 + coef) for coef in coefs][::-1]) + "-1"

sopa.io.write_image(path, image, lazy=True, tile_width=TILE_SIZE, n_subscales=5, pixel_size=0.2125, ram_threshold_gb=4, is_dir=True)

Convert an image into a morphology.ome.tif file that can be read by the Xenium Explorer

Parameters:

  • path (str): Path to the Xenium Explorer directory where the image will be written. Required.
  • image (MultiscaleSpatialImage | SpatialImage | ndarray): Image of shape (C, Y, X). Required.
  • lazy (bool): If True, the image will not be read in-memory (except if the image size is below ram_threshold_gb). If False, all the image levels are always loaded in-memory. Default: True.
  • tile_width (int): Xenium tile width (do not update). Default: TILE_SIZE.
  • n_subscales (int): Number of sub-scales in the pyramidal image. Default: 5.
  • pixel_size (float): Xenium pixel size (do not update). Default: 0.2125.
  • ram_threshold_gb (int | None): If an image (of any level of the pyramid) is below this threshold, it will be loaded in-memory. Default: 4.
  • is_dir (bool): If False, then path is a path to a single file, not to the Xenium Explorer directory. Default: True.
Source code in sopa/io/explorer/images.py
def write_image(
    path: str,
    image: MultiscaleSpatialImage | SpatialImage | np.ndarray,
    lazy: bool = True,
    tile_width: int = TILE_SIZE,
    n_subscales: int = 5,
    pixel_size: float = 0.2125,
    ram_threshold_gb: int | None = 4,
    is_dir: bool = True,
):
    """Convert an image into a `morphology.ome.tif` file that can be read by the Xenium Explorer

    Args:
        path: Path to the Xenium Explorer directory where the image will be written
        image: Image of shape `(C, Y, X)`
        lazy: If `True`, the image will not be read in-memory (except if the image size is below `ram_threshold_gb`). If `False`, all the image levels are always loaded in-memory.
        tile_width: Xenium tile width (do not update).
        n_subscales: Number of sub-scales in the pyramidal image.
        pixel_size: Xenium pixel size (do not update).
        ram_threshold_gb: If an image (of any level of the pyramid) is below this threshold, it will be loaded in-memory.
        is_dir: If `False`, then `path` is a path to a single file, not to the Xenium Explorer directory.
    """
    path = explorer_file_path(path, FileNames.IMAGE, is_dir)

    if isinstance(image, np.ndarray):
        assert len(image.shape) == 3, "Can only write channels with shape (C,Y,X)"
        log.info(f"Converting image of shape {image.shape} into a SpatialImage (with dims: C,Y,X)")
        image = SpatialImage(image, dims=["c", "y", "x"], name="image")

    image = _to_xenium_explorer_multiscale(image, n_subscales)

    image_writer = MultiscaleImageWriter(image, pixel_size=pixel_size, tile_width=tile_width)
    image_writer.write(path, lazy=lazy, ram_threshold_gb=ram_threshold_gb)

sopa.io.save_column_csv(path, adata, key)

Save one column of the AnnData object as a CSV that can be opened interactively in the explorer, under the "cell" panel.

Parameters:

  • path (str): Path where to write the CSV that will be opened in the Xenium Explorer. Required.
  • adata (AnnData): An AnnData object. Required.
  • key (str): Key of adata.obs containing the column to convert. Required.
Source code in sopa/io/explorer/table.py
def save_column_csv(path: str, adata: AnnData, key: str):
    """Save one column of the AnnData object as a CSV that can be open interactively in the explorer, under the "cell" panel.

    Args:
        path: Path where to write the CSV that will be open in the Xenium Explorer
        adata: An `AnnData` object
        key: Key of `adata.obs` containing the column to convert
    """
    df = pd.DataFrame({"cell_id": adata.obs_names, "group": adata.obs[key].values})
    df.to_csv(path, index=None)
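
A minimal usage sketch (the column key is hypothetical, and adata is an existing AnnData object):

import sopa.io

# export the "cell_type" column so it can be loaded in the Explorer's "cell" panel
sopa.io.save_column_csv("./cell_types.csv", adata, "cell_type")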

Report

sopa.io.write_report(path, sdata)

Create an HTML report (or web report) after running Sopa.

Note

This report is automatically generated based on a custom python-to-html engine

Parameters:

  • path (str): Path to the .html report that has to be created. Required.
  • sdata (SpatialData): A SpatialData object, after running Sopa. Required.
Source code in sopa/io/report/generate.py
def write_report(path: str, sdata: SpatialData):
    """Create a HTML report (or web report) after running Sopa.

    Note:
        This report is automatically generated based on a custom python-to-html engine

    Args:
        path: Path to the `.html` report that has to be created
        sdata: A `SpatialData` object, after running Sopa
    """
    sections = SectionBuilder(sdata).compute_sections()

    log.info(f"Writing report to {path}")
    Root(sections).write(path)