integrations.sh

Oxford Dictionaries

Homepage
https://api.apis.guru/v2/specs/oxforddictionaries.com/1.11.0.json
Provider
oxforddictionaries.com
OpenAPI version
3.0.0
Spec (JSON)
https://api.apis.guru/v2/specs/oxforddictionaries.com/1.11.0/openapi.json
Spec (YAML)
https://api.apis.guru/v2/specs/oxforddictionaries.com/1.11.0/openapi.yaml

Tools (28)

Extracted live via the executor SDK.

  • dictionaryEntries.getEntriesSourceLangWordId

    Use this to retrieve definitions, pronunciations, example sentences, grammatical information and word origins. It only works for dictionary headwords, so you may need to use the Lemmatron first if your input is likely to be an inflected form (e.g., 'swimming'). This would return the linked headword (e.g., 'swim'), which you can then use in the Entries endpoint. Unless specified using a region filter, the default lookup will be the Oxford Dictionary of English (GB).
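    As a sketch, the Lemmatron-then-Entries flow above amounts to two lookups. The base URL and the lowercase word IDs below are assumptions about the hosted Oxford Dictionaries API, not something stated in this listing:

    ```python
    # Hypothetical sketch of the lemmatize-then-look-up flow described above.
    # BASE_URL is an assumption about the hosted API, not taken from this listing.
    BASE_URL = "https://od-api.oxforddictionaries.com/api/v2"

    def lemmas_url(source_lang, word):
        """Lemmatron URL: maps an inflected form to its linked headword(s)."""
        return f"{BASE_URL}/lemmas/{source_lang}/{word.lower()}"

    def entries_url(source_lang, word_id):
        """Entries URL: keyed by dictionary headword, not inflected forms."""
        return f"{BASE_URL}/entries/{source_lang}/{word_id.lower()}"

    # 'swimming' is an inflected form, so consult the Lemmatron first; its
    # response links to the headword 'swim', which Entries then accepts.
    print(lemmas_url("en-gb", "swimming"))
    print(entries_url("en-gb", "swim"))
    ```

    A real request would also attach the API credentials and call Entries with the headword returned in the Lemmatron response.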

  • dictionaryEntries.getEntriesSourceLangWordIdFilters

    Use filters to limit the entry information that is returned. For example, you may only require definitions and not everything else, or just pronunciations. The full list of filters can be retrieved from the filters Utility endpoint. You can also specify values within the filter using '='. For example 'grammaticalFeatures=singular'. Filters can also be combined using a semicolon.
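    Under those rules, a filter string could be assembled like this (a minimal sketch; `build_filters` and the particular filter names are illustrative, not part of the API):

    ```python
    def build_filters(filters):
        """Combine filters per the rules above: values attach with '=',
        multiple values join with ',', and filters join with ';'."""
        parts = []
        for name, values in filters.items():
            parts.append(f"{name}={','.join(values)}" if values else name)
        return ";".join(parts)

    # Restrict an entry to singular grammatical features plus definitions.
    print(build_filters({"grammaticalFeatures": ["singular"], "definitions": None}))
    # -> grammaticalFeatures=singular;definitions
    ```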

  • dictionaryEntries.getEntriesSourceLangWordIdRegionsRegion

    Use this filter to restrict the lookup to either our Oxford Dictionary of English (GB) or New Oxford American Dictionary (US).

  • lemmatron.getInflectionsSourceLangWordIdFilters

    Use this to check if a word exists in the dictionary, or what 'root' form it links to (e.g., swimming > swim). The response tells you the possible lemmas for a given inflected word. This can then be combined with other endpoints to retrieve more information.

  • lexiStats.getStatsFrequencyNgramsSourceLangCorpusNgramSize

    This endpoint returns frequencies of ngrams of size 1-4, that is, the number of times a word (ngram size = 1) or sequence of words (ngram size > 1) appears in the corpus. Ngrams are case-sensitive ("I AM" and "I am" will have different frequencies) and frequencies are calculated per word (true case), so "the book" and "the books" are two different ngrams. The results can be filtered based on query parameters.

    Parameters can be provided in PATH, GET or POST (form or json). The parameters in PATH are overridden by parameters in GET, POST and json (in that order). In PATH, individual options are separated by semicolon and values are separated by commas (where multiple values can be used).

    Example for bigrams (ngram of size 2):

    • PATH: /tokens=a word,another word

    • GET: /?tokens=a word&tokens=another word

    • POST (json):

      json
        {
          "tokens": ["a word", "another word"]
        }

    Either "tokens" or "contains" has to be provided.
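    The PATH and GET conventions above can be sketched with two small encoders (the helper names are illustrative; only the separators follow the rules stated here):

    ```python
    from urllib.parse import urlencode

    def path_params(options):
        """PATH style: options joined by ';', multiple values joined by ','."""
        return ";".join(f"{k}={','.join(v)}" for k, v in options.items())

    def get_params(options):
        """GET style: the parameter name is repeated once per value."""
        return urlencode([(k, item) for k, v in options.items() for item in v])

    tokens = {"tokens": ["a word", "another word"]}
    print("/" + path_params(tokens))   # /tokens=a word,another word
    print("/?" + get_params(tokens))   # /?tokens=a+word&tokens=another+word
    ```

    Note that urlencode escapes spaces, so the GET form is transmitted as `tokens=a+word&tokens=another+word`.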

    Some queries with "contains" or "sort" can exceed the 30s timeout, in which case the API will return an error message with status code 503. You can mitigate this by providing additional restrictions such as "minFrequency" and "maxFrequency".

    You can use the parameters "offset" and "limit" to paginate through large result sets. For convenience, the HTTP header "Link" is set on the response to provide links to "first", "self", "next", "prev" and "last" pages of results (depending on the context). For example, if the result set contains 50 results and the parameter "limit" is set to 25, the Link header will contain a URL for the first 25 results and the next 25 results.

    Some libraries, such as Python's requests, can parse the header automatically and offer a convenient way of iterating through the results. For example:

    python
        while url:
            r = requests.get(url)
            r.raise_for_status()
            for item in r.json()['results']:
                yield item
            url = r.links.get('next', {}).get('url')

  • lexiStats.getStatsFrequencyWordSourceLang

    This endpoint provides the frequency of a given word. When multiple database records match the query parameters, the returned frequency is the sum of the individual frequencies. For example, if the query parameter is lemma=test, the returned frequency will include the verb "test", the noun "test" and the adjective "test" in all forms (Test, tested, testing, etc.).

    If you are interested in the frequency of the word "test" but want to exclude other forms (e.g., tested) use the option trueCase=test. Normally, the word "test" will be spelt with a capital letter at the beginning of a sentence. The option trueCase will ignore this and it will count "Test" and "test" as the same token. If you are interested in frequencies of "Test" and "test", use the option wordform=test or wordform=Test. Note that trueCase is not just a lower case of the word as some words are genuinely spelt with a capital letter such as the word "press" in Oxford University Press.

    Parameters can be provided in PATH, GET or POST (form or json). The parameters in PATH are overridden by parameters in GET, POST and json (in that order). In PATH, individual options are separated by semicolon and values are separated by commas (where multiple values can be used). Examples:

    • PATH: /lemma=test;lexicalCategory=noun

    • GET: /?lemma=test&lexicalCategory=noun

    • POST (json):

      json
        {
          "lemma": "test",
          "lexicalCategory": "noun"
        }


    One of the options wordform/trueCase/lemma/lexicalCategory has to be provided.

  • lexiStats.getStatsFrequencyWordsSourceLang

    This endpoint provides a list of frequencies for a given word or words. Unlike the /word/ endpoint, the results are split into the smallest units.

    To exclude a specific value, prepend it with the minus sign ('-'). For example, to get frequencies of the lemma 'happy' but exclude superlative forms (i.e., happiest) you could use options 'lemma=happy;grammaticalFeatures=-degreeType:superlative'.

    Parameters can be provided in PATH, GET or POST (form or json). The parameters in PATH are overridden by parameters in GET, POST and json (in that order). In PATH, individual options are separated by semicolon and values are separated by commas (where multiple values can be used).

    The parameters wordform/trueCase/lemma/lexicalCategory also exist in a plural form, each taking a list of items. Examples:

    • PATH: /wordforms=happy,happier,happiest
    • GET: /?wordforms=happy&wordforms=happier&wordforms=happiest
    • POST (json):
    json
      {
        "wordforms": ["happy", "happier", "happiest"]
      }

    A more complex example of retrieving frequencies of multiple lemmas:

      {
          "lemmas": ["happy", "content", "cheerful", "cheery", "merry", "joyful", "ecstatic"],
          "grammaticalFeatures": {
              "adjectiveFunctionType": "predicative"
          },
          "lexicalCategory": "adjective",
          "sort": ["lemma", "-frequency"]
      }

    Some queries with "collate" or "sort" can exceed the 30s timeout, in which case the API will return an error message with status code 503. You can mitigate this by providing additional restrictions such as "minFrequency" and "maxFrequency".

    You can use the parameters "offset" and "limit" to paginate through large result sets. For convenience, the HTTP header "Link" is set on the response to provide links to "first", "self", "next", "prev" and "last" pages of results (depending on the context). For example, if the result set contains 50 results and the parameter "limit" is set to 25, the Link header will contain a URL for the first 25 results and the next 25 results.

    Some libraries, such as Python's requests, can parse the header automatically and offer a convenient way of iterating through the results. For example:

    python
        while url:
            r = requests.get(url)
            r.raise_for_status()
            for item in r.json()['results']:
                yield item
            url = r.links.get('next', {}).get('url')

  • search.getSearchSourceLang

    Use this to retrieve possible headword matches for a given string of text. The results are calculated using headword matching, fuzzy matching, and lemmatization.

  • search.getSearchSourceSearchLanguageTranslationsTargetSearchLanguage

    Use this to find matches in our translation dictionaries.

  • thesaurus.getEntriesSourceLangWordIdAntonyms

    Retrieve words that are opposite in meaning to the input word (antonyms).

  • thesaurus.getEntriesSourceLangWordIdSynonyms

    Use this to retrieve words that are similar in meaning to the input word (synonyms).

  • thesaurus.getEntriesSourceLangWordIdSynonymsAntonyms

    Retrieve available synonyms and antonyms for a given word and language.

  • theSentenceDictionary.getEntriesSourceLanguageWordIdSentences

    Use this to retrieve sentences extracted from corpora which show how a word is used in the language. This is available for English and Spanish. For English, the sentences are linked to the correct sense of the word in the dictionary. In Spanish, they are linked at the headword level.

  • translation.getEntriesSourceTranslationLanguageWordIdTranslationsTargetTranslationLanguage

    Use this to return translations for a given word. In the event that a word in the dataset does not have a direct translation, the response will be a definition in the target language.

  • utility.getDomainsSourceDomainsLanguageTargetDomainsLanguage

    Returns a list of the available domains for a given bilingual language dataset.

  • utility.getDomainsSourceLanguage

    Returns a list of the available domains for a given monolingual language dataset.

  • utility.getFilters

    Returns a list of all the valid filters to construct API calls.

  • utility.getFiltersEndpoint

    Returns a list of all the valid filters for a given endpoint to construct API calls.

  • utility.getGrammaticalFeaturesSourceLanguage

    Returns a list of the available grammatical features for a given language dataset.

  • utility.getLanguages

    Returns a list of monolingual and bilingual language datasets available in the API.

  • utility.getLexicalcategoriesLanguage

    Returns a list of available lexical categories for a given language dataset.

  • utility.getRegionsSourceLanguage

    Returns a list of the available regions for a given monolingual language dataset.

  • utility.getRegistersSourceLanguage

    Returns a list of the available registers for a given monolingual language dataset.

  • utility.getRegistersSourceRegisterLanguageTargetRegisterLanguage

    Returns a list of the available registers for a given bilingual language dataset.

  • wordlist.getWordlistSourceLangFiltersAdvanced

    Use this to apply more complex filters to the list of words. For example, you may only want to filter out words for which all senses match the filter, or only its 'prime sense'. You can also filter by word length or match by substring (prefix).

  • wordlist.getWordlistSourceLangFiltersBasic

    Use this to retrieve a list of words for a particular domain, lexical category, register and/or region. View the full list of possible filters using the filters Utility endpoint. The response only includes headwords, not all their possible inflections. If you require a full wordlist including inflected forms, contact us and we can help.

  • openapi.previewSpec

    Preview an OpenAPI document before adding it as a source

  • openapi.addSource

    Add an OpenAPI source and register its operations as tools