Items

The main goal in scraping is to extract structured data from unstructured sources, typically web pages. Spiders may return the extracted data as items, Python objects that define key-value pairs.

Scrapy supports multiple types of items. When you create an item, you may use whichever type of item you want. When you write code that receives an item, your code should work for any item type.

Item Types

Scrapy supports the following types of items: dictionaries, dataclass objects, Item objects, and custom items.

Dictionaries

As an item type, dict is convenient and familiar.
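For example, a spider callback can simply yield plain dicts as items; a minimal sketch, where the URL and CSS selectors are illustrative:

import scrapy

class QuotesSpider(scrapy.Spider):
    # Hypothetical spider; only the dicts yielded from parse() matter here.
    name = "quotes"
    start_urls = ["https://quotes.toscrape.com/"]

    def parse(self, response):
        for quote in response.css("div.quote"):
            # Each yielded dict is handled by Scrapy as an item.
            yield {
                "text": quote.css("span.text::text").get(),
                "author": quote.css("small.author::text").get(),
            }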

Dataclass objects

New in version 2.1.

dataclass() allows defining item classes with field names, so that item exporters can export all fields by default even if the first scraped object does not have values for all of them.

Dataclasses also allow defining the type and default value of each defined field.
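A minimal sketch of a dataclass item; the class and field names are illustrative:

from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ProductItem:
    # Field names, types and default values are all declared here.
    name: str
    price: Optional[float] = None               # simple default value
    tags: list = field(default_factory=list)    # mutable default via default_factory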

Note

Field names and types are not enforced at run time.

They work natively in Python 3.7 or later, or with the dataclasses backport in Python 3.6.

Item objects

Item provides a dict-like API plus additional features that make it the most feature-complete item type:

class scrapy.item.Item([arg])[source]

Item objects replicate the standard dict API, including its __init__ method.

Item allows defining field names, so that:

  • KeyError is raised when using undefined field names (i.e. prevents typos going unnoticed)

  • Item exporters can export all fields by default even if the first scraped object does not have values for all of them

Item also allows defining field metadata, which can be used to customize serialization.

trackref tracks Item objects to help find memory leaks (see Debugging memory leaks with trackref).

Item objects also provide the following additional API members:

copy()

Return a shallow copy of this item.

deepcopy()

Return a deepcopy() of this item.

fields

A dictionary containing all declared fields for this Item, not only those populated. The keys are the field names and the values are the Field objects used in the Item declaration.

Custom items

Subclass BaseItem to define additional item types not based on any of the above:

class scrapy.item.BaseItem[source]

Base class for item types that do not subclass any other supported item type.

BaseItem instances may be tracked to debug memory leaks.

Note that, while is_item_like() returns True for any instance of a BaseItem subclass, ItemAdapter may not work as expected with your custom item objects, especially if they do not implement the same API as one of the supported item types.
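For illustration only, a bare-bones custom item type could look like the following; is_item_like() would recognize its instances, but because it does not expose a dict-like API, ItemAdapter may not be able to read or write its fields:

from scrapy.item import BaseItem

class CustomItem(BaseItem):
    # Hypothetical custom item type, not based on dict, dataclass or Item.
    def __init__(self, name=None, price=None):
        self.name = name
        self.price = price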

Working with Item objects

Declaring Item subclasses

Item subclasses are declared using a simple class definition syntax and Field objects. Here is an example:

import scrapy

class Product(scrapy.Item):
    name = scrapy.Field()
    price = scrapy.Field()
    stock = scrapy.Field()
    tags = scrapy.Field()
    last_updated = scrapy.Field(serializer=str)

Note

Those familiar with Django will notice that Scrapy Items are declared similarly to Django Models, except that Scrapy Items are much simpler as there is no concept of different field types.

Declaring fields

Field objects are used to specify metadata for each field, such as the serializer function specified for the last_updated field in the example above.

You can specify any kind of metadata for each field. There is no restriction on the values accepted by Field objects. For this same reason, there is no reference list of all available metadata keys: each key defined in Field objects could be used by a different component, and only those components know about it. You can also define and use any other Field key in your project, for your own needs.

The main goal of Field objects is to provide a way to define all field metadata in one place. Typically, those components whose behaviour depends on each field use certain field keys to configure that behaviour. Refer to their documentation to see which metadata keys each component uses.

It’s important to note that the Field objects used to declare the item do not stay assigned as class attributes. Instead, they can be accessed through the Item.fields attribute.

class scrapy.item.Field([arg])[source]

The Field class is just an alias to the built-in dict class and doesn’t provide any extra functionality or attributes. In other words, Field objects are plain-old Python dicts. A separate class is used to support the item declaration syntax based on class attributes.
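A quick illustration of both points, using the Product item declared above (outputs shown are indicative):

>>> import scrapy
>>> f = scrapy.Field(serializer=str, foo='bar')  # any metadata keys are accepted
>>> isinstance(f, dict)
True
>>> Product.fields['last_updated']               # metadata lives in Item.fields
{'serializer': <class 'str'>}
>>> Product.fields['name']                       # a field declared without metadata
{}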

Working with Item objects

Here are some examples of common tasks performed with items, using the Product item declared above. You will notice the API is very similar to the dict API.

Creating items

>>> product = Product(name='Desktop PC', price=1000)
>>> print(product)
Product(name='Desktop PC', price=1000)

Getting field values

>>> product['name']
'Desktop PC'
>>> product.get('name')
'Desktop PC'
>>> product['price']
1000
>>> product['last_updated']
Traceback (most recent call last):
    ...
KeyError: 'last_updated'
>>> product.get('last_updated', 'not set')
'not set'
>>> product['lala'] # getting unknown field
Traceback (most recent call last):
    ...
KeyError: 'lala'
>>> product.get('lala', 'unknown field')
'unknown field'
>>> 'name' in product  # is name field populated?
True
>>> 'last_updated' in product  # is last_updated populated?
False
>>> 'last_updated' in product.fields  # is last_updated a declared field?
True
>>> 'lala' in product.fields  # is lala a declared field?
False

Setting field values

>>> product['last_updated'] = 'today'
>>> product['last_updated']
'today'
>>> product['lala'] = 'test' # setting unknown field
Traceback (most recent call last):
    ...
KeyError: 'Product does not support field: lala'

Accessing all populated values

To access all populated values, just use the typical dict API:

>>> product.keys()
['price', 'name']
>>> product.items()
[('price', 1000), ('name', 'Desktop PC')]

Copying items

To copy an item, you must first decide whether you want a shallow copy or a deep copy.

If your item contains mutable values like lists or dictionaries, a shallow copy will keep references to the same mutable values across all different copies.

For example, if you have an item with a list of tags, and you create a shallow copy of that item, both the original item and the copy have the same list of tags. Adding a tag to the list of one of the items will add the tag to the other item as well.

If that is not the desired behavior, use a deep copy instead.

See the copy module for more information.

To create a shallow copy of an item, you can either call copy() on an existing item (product2 = product.copy()) or instantiate your item class from an existing item (product2 = Product(product)).

To create a deep copy, call deepcopy() instead (product2 = product.deepcopy()).
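For instance, using the tags field of the Product item declared above (outputs shown are indicative):

>>> product = Product(name='Desktop PC', tags=['pc'])
>>> product2 = product.copy()            # shallow copy: the tags list is shared
>>> product2['tags'].append('desktop')
>>> product['tags']
['pc', 'desktop']
>>> product3 = product.deepcopy()        # deep copy: the tags list is copied too
>>> product3['tags'].append('gaming')
>>> product['tags']
['pc', 'desktop']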

Other common tasks

Creating dicts from items:

>>> dict(product) # create a dict from all populated values
{'price': 1000, 'name': 'Desktop PC'}

Creating items from dicts:

>>> Product({'name': 'Laptop PC', 'price': 1500})
Product(price=1500, name='Laptop PC')
>>> Product({'name': 'Laptop PC', 'lala': 1500}) # warning: unknown field in dict
Traceback (most recent call last):
    ...
KeyError: 'Product does not support field: lala'

Extending Item subclasses

You can extend Items (to add more fields or to change some metadata for some fields) by declaring a subclass of your original Item.

For example:

class DiscountedProduct(Product):
    discount_percent = scrapy.Field(serializer=str)
    discount_expiration_date = scrapy.Field()

You can also extend field metadata by using the previous field metadata and appending more values, or changing existing values, like this:

class SpecificProduct(Product):
    name = scrapy.Field(Product.fields['name'], serializer=my_serializer)

That adds (or replaces) the serializer metadata key for the name field, keeping all the previously existing metadata values.
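Since Field objects are plain dicts, passing the previous Field object as the first argument copies its existing metadata before the keyword arguments are applied. As a hypothetical illustration with the last_updated field (the indexed key is made up for this example):

class IllustratedProduct(Product):
    # Keeps serializer=str from Product and adds a made-up metadata key.
    last_updated = scrapy.Field(Product.fields['last_updated'], indexed=True)

IllustratedProduct.fields['last_updated'] would then be {'serializer': str, 'indexed': True}.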

Supporting All Item Types

In code that receives an item, such as methods of item pipelines or spider middlewares, it is a good practice to use the ItemAdapter class and the is_item_like() function to write code that works for any supported item type:

class scrapy.utils.item.ItemAdapter(item)[source]

Wrapper class to interact with any supported item type using the same, dict-like API.

>>> from dataclasses import dataclass
>>> from scrapy.utils.item import ItemAdapter
>>> @dataclass
... class InventoryItem:
...     name: str
...     price: int
...
>>> item = InventoryItem(name="foo", price=10)
>>> adapter = ItemAdapter(item)
>>> adapter.item is item
True
>>> adapter["name"]
'foo'
>>> adapter["name"] = "bar"
>>> adapter["price"] = 5
>>> item
InventoryItem(name='bar', price=5)

scrapy.utils.item.is_item_like(obj)[source]

Return True if obj is an instance of a supported item type; return False otherwise.
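For example, an item pipeline can rely on ItemAdapter so that the same code handles dicts, dataclass objects and Item subclasses alike. A minimal sketch, where the pipeline name and the price adjustment are purely illustrative:

from scrapy.utils.item import ItemAdapter

class VatPipeline:
    # Hypothetical pipeline: the same code works for any supported item type.
    def process_item(self, item, spider):
        adapter = ItemAdapter(item)
        if adapter.get('price') is not None:
            adapter['price'] = round(adapter['price'] * 1.2, 2)  # add 20% VAT
        return item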