audio transcription with whisper from R

Last week, OpenAI released version 2 of Whisper, a neural net that approaches human-level robustness and accuracy on speech recognition. You can now call a C/C++ inference engine directly from R which allows you to transcribe .wav audio files.

logo audio.whisper

To make this easy to do from R, BNOSAC created an R wrapper around the whisper.cpp code. This R package is available at https://github.com/bnosac/audio.whisper and can be installed as follows.

remotes::install_github("bnosac/audio.whisper")

The following code shows how you can transcribe an example 16-bit .wav file with a fragment of a speech by JFK which is included in the package.

library(audio.whisper)
model <- whisper("tiny")
path  <- system.file(package = "audio.whisper", "samples", "jfk.wav")
trans <- predict(model, newdata = path, language = "en", n_threads = 2)
trans
$n_segments
[1] 1

$data
 segment         from           to                                                                                                       text
       1 00:00:00.000 00:00:11.000  And so my fellow Americans ask not what your country can do for you ask what you can do for your country.

$tokens
 segment      token token_prob
       1        And  0.7476438
       1         so  0.9042299
       1         my  0.6872202
       1     fellow  0.9984470
       1  Americans  0.9589157
       1        ask  0.2573057
       1        not  0.7678108
       1       what  0.6542882
       1       your  0.9386917
       1    country  0.9854987
       1        can  0.9813995
       1         do  0.9937403
       1        for  0.9791515
       1        you  0.9925495
       1        ask  0.3058807
       1       what  0.8303462
       1        you  0.9735528
       1        can  0.9711444
       1         do  0.9616748
       1        for  0.9778513
       1       your  0.9604713
       1    country  0.9923630
       1          .  0.4983074

Another example is based on a Micro Machines commercial from the 1980s.

I've always wanted to get the transcription of the performances of Francis E. Dec, available on UbuWeb Sound, like this performance: https://www.ubu.com/media/sound/dec_francis/Dec-Francis-E_rant1.mp3. This is how you can now do that from R.

library(av)
download.file(url = "https://www.ubu.com/media/sound/dec_francis/Dec-Francis-E_rant1.mp3", 
              destfile = "rant1.mp3", mode = "wb")
av_audio_convert("rant1.mp3", output = "output.wav", format = "wav", sample_rate = 16000)

trans <- predict(model, newdata = "output.wav", language = "en", duration = 30 * 1000, offset = 7 * 1000, token_timestamps = TRUE) trans $n_segments [1] 11 $data segment from to text 1 00:00:07.000 00:00:09.000 Look at the picture. 2 00:00:09.000 00:00:11.000 See the skull. 3 00:00:11.000 00:00:13.000 The part of bone removed. 4 00:00:13.000 00:00:16.000 The master race Frankenstein radio controls. 5 00:00:16.000 00:00:18.000 The brain thoughts broadcasting radio. 6 00:00:18.000 00:00:21.000 The eyesight television. The Frankenstein earphone radio. 7 00:00:21.000 00:00:25.000 The threshold brain wash radio. The latest new skull reforming. 8 00:00:25.000 00:00:28.000 To contain all Frankenstein controls. 9 00:00:28.000 00:00:31.000 Even in thin skulls of white pedigree males. 10 00:00:31.000 00:00:34.000 Visible Frankenstein controls. 11 00:00:34.000 00:00:37.000 The synthetic nerve radio, directional and an alloop. $tokens segment token token_prob token_from token_to 1 Look 0.4281234 00:00:07.290 00:00:07.420 1 at 0.9485379 00:00:07.420 00:00:07.620 1 the 0.9758387 00:00:07.620 00:00:07.940 1 picture 0.9734664 00:00:08.150 00:00:08.580 1 . 0.9688568 00:00:08.680 00:00:08.910 2 See 0.9847929 00:00:09.000 00:00:09.420 2 the 0.7588121 00:00:09.420 00:00:09.840 2 skull 0.9989663 00:00:09.840 00:00:10.310 2 . 0.9548351 00:00:10.550 00:00:11.000 3 The 0.9914295 00:00:11.000 00:00:11.170 3 part 0.9789217 00:00:11.560 00:00:11.600 3 of 0.9958754 00:00:11.600 00:00:11.770 3 bone 0.9759618 00:00:11.770 00:00:12.030 3 removed 0.9956936 00:00:12.190 00:00:12.710 3 . 0.9965582 00:00:12.710 00:00:12.940
...

Maybe in the near future we will put it on CRAN; currently it is only available at https://github.com/bnosac/audio.whisper.

Get in touch if you are interested in this and let us know what you plan to use it for. 

Image Annotation

This week, I uploaded a newer version of the R package recogito to CRAN.

The recogito R package provides tools to manipulate and annotate images and text in shiny. It is an htmlwidgets R wrapper around the excellent recogito-js and annotorious JavaScript libraries as well as their integration with openseadragon.
You can use the package to set up shiny apps which

  • annotate areas of interest (rectangles / polygons) in images with specific labels
  • annotate text using tags and relations between these tags (for entity labelling / entity linking) - see the sketch below.
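
As a sketch of the text-annotation side, the snippet below sets up a small shiny app for tagging entities in a piece of text. Note that the function names recogito(), recogitoOutput(), renderRecogito() and read_recogito() are assumed here by analogy with the annotorious functions used in the image example further down; check the package documentation for the exact API.

library(shiny)
library(recogito)
txt <- "Josquin des Prez was a composer of the Renaissance, born in the County of Hainaut."
## assumed API, analogous to annotorious/read_annotorious shown below for images
ui <- fluidPage(
  recogitoOutput(outputId = "anno_text"),
  tags$h3("Results"),
  verbatimTextOutput(outputId = "annotation_result"))
server <- function(input, output) {
  output$anno_text <- renderRecogito({
    recogito(inputId = "results", text = txt, tags = c("PERSON", "LOCATION"))
  })
  output$annotation_result <- renderPrint({
    read_recogito(input$results)
  })
}
shinyApp(ui, server)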

The video below shows the image manipulation functionality in action in a shiny app which allows you to align image areas with chunks of transcribed handwritten texts.

Although the package was originally designed to extract information from handwritten text documents from the 18th-19th century, you can probably use it in other domains as well.
To get you started install the package from CRAN and read the README.

install.packages("recogito")

The following code shows an example app which displays an image from a url and allows you to annotate areas of interest. Enjoy.

library(shiny)
library(recogito)
url <- "https://upload.wikimedia.org/wikipedia/commons/a/a0/Pamphlet_dutch_tulipomania_1637.jpg"
ui <- fluidPage(
  openseadragonOutput(outputId = "anno", height = "700px"),
  tags$h3("Results"),
  verbatimTextOutput(outputId = "annotation_result"))
server <- function(input, output) {
  current_img <- reactiveValues(url = url)
  output$anno <- renderOpenSeaDragon({
    annotorious(inputId = "results", src = current_img$url, tags = c("IMAGE", "TEXT"), type = "openseadragon")
  })
  output$annotation_result <- renderPrint({
    read_annotorious(input$results)
  })
}
shinyApp(ui, server)

recogito example

doc2vec in R

Learn how to apply doc2vec in R on your text in this pdf presentation available at https://www.bnosac.be/index.php/blog/103-doc2vec-in-R, in which we focus on our R package doc2vec available at https://github.com/bnosac/doc2vec.
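
As a quick taste of the package, here is a minimal sketch which trains paragraph embeddings with paragraph2vec() on the brussels_reviews example data shipped with the udpipe package; the data choice and hyperparameters are arbitrary and only meant to show the workflow.

library(doc2vec)
## example data: a data.frame with columns doc_id and text (one string per document)
data("brussels_reviews", package = "udpipe")
x <- data.frame(doc_id = brussels_reviews$id,
                text   = tolower(brussels_reviews$feedback),
                stringsAsFactors = FALSE)
x$text <- gsub("[[:space:]]+", " ", x$text)   ## paragraph2vec expects text without newlines/tabs
## train a PV-DBOW paragraph2vec model and extract the document embeddings
model <- paragraph2vec(x, type = "PV-DBOW", dim = 50, iter = 10, min_count = 5, threads = 1)
emb   <- as.matrix(model, which = "docs")
dim(emb)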

You can view the presentation at the link above.

New since 2020: you can now access the courses Text Mining with R and Advanced R programming online through our online school; let us know here if you want to obtain access.

Enjoy.

udpipe R package updated

An update of the udpipe R package (https://bnosac.github.io/udpipe/en) landed safely on CRAN last week. The udpipe R package was originally put on CRAN in 2017, wrapping the UDPipe (v1.2, C++) tokeniser / lemmatiser / parts-of-speech tagger and dependency parser. It now has many more functionalities in addition to providing this parser.

The current release (0.8.4-1 on CRAN: https://cran.r-project.org/package=udpipe) makes sure that the default models used are the ones trained on version 2.5 of Universal Dependencies. Other features of the release are detailed in the NEWS item. This is what dependency parsing looks like on some sample text.

library(udpipe)
x <- udpipe("The package provides a dependency parsers built on data from universaldependencies.org", "english")
View(x)
library(ggraph)
library(ggplot2)
library(igraph)
library(textplot)
plt <- textplot_dependencyparser(x, size = 4, title = "udpipe R package - dependency parsing")
plt

udpipe parser plot

Over the years, the toolkit has also incorporated many functionalities for commonly used data manipulations on texts which are enriched with the output of the parser: collocations, token co-occurrence, document-term matrix handling, term frequency inverse document frequency calculations, information retrieval metrics, handling of multi-word expressions, keyword detection (Rapid Automatic Keyword Extraction, noun phrase extraction, syntactical patterns), sentiment scoring and semantic similarity analysis.
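
To give a flavour of these helpers, below is a small sketch which runs Rapid Automatic Keyword Extraction and noun/adjective co-occurrences on the brussels_reviews_anno example data shipped with the package; the chosen subset and settings are just illustrative.

library(udpipe)
## example annotated data shipped with the package (Airbnb reviews)
data("brussels_reviews_anno", package = "udpipe")
x <- subset(brussels_reviews_anno, language == "nl")

## keyword detection with Rapid Automatic Keyword Extraction on nouns and adjectives
keyw <- keywords_rake(x, term = "lemma", group = "doc_id",
                      relevant = x$upos %in% c("NOUN", "ADJ"))
head(keyw)

## co-occurrence of nouns and adjectives within the same sentence
cooc <- cooccurrence(subset(x, upos %in% c("NOUN", "ADJ")),
                     term = "lemma", group = c("doc_id", "sentence_id"))
head(cooc)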

Many add-on R packages

The udpipe package is loosely coupled with other NLP R packages which BNOSAC released on CRAN over the last 4 years. Loosely coupled means that none of the packages have hard dependencies on one another, making them easy to install and maintain and allowing you to use only the packages and tools that you want.

Here is a small list of loosely coupled packages by BNOSAC which contain functions and documentation where the udpipe package is used as a preprocessing step; a small example of this coupling follows the list.

- BTM: Biterm Topic Modelling
- crfsuite: Build named entity recognition models using conditional random fields
- nametagger: Build named entity recognition models using markov models
- torch.ner: Named Entity Recognition using torch
- word2vec: Training and applying the word2vec algorithm
- ruimtehol: Text embedding techniques using Starspace
- textrank: Text summarisation and keyword detection using textrank
- brown: Brown word clustering on texts
- sentencepiece: Byte Pair Encoding and Unigram tokenisation using sentencepiece
- tokenizers.bpe: Byte Pair Encoding tokenisation using YouTokenToMe
- text.alignment: Find text similarities using Smith-Waterman
- textplot: Visualise complex relations in texts
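
To show what this loose coupling looks like in practice, here is a hedged sketch which feeds udpipe-annotated lemmas of nouns and adjectives into the BTM package to build a biterm topic model; the dataset, number of topics and iteration settings are purely illustrative.

library(udpipe)
library(BTM)
## take nouns/adjectives from the example annotated data and keep doc_id + lemma
data("brussels_reviews_anno", package = "udpipe")
x <- subset(brussels_reviews_anno, language == "nl" & upos %in% c("NOUN", "ADJ"))
x <- x[, c("doc_id", "lemma")]
## fit a biterm topic model with 5 topics and inspect the top terms per topic
set.seed(123)
model <- BTM(x, k = 5, beta = 0.01, iter = 200, trace = 100)
terms(model, top_n = 10)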

textplot example

Model building example

To showcase the loose integration, let's use the udpipe package alongside the word2vec package to build a udpipe model ourselves on the German GSD treebank, which is described at https://universaldependencies.org/treebanks/de_gsd/index.html and contains a set of CC BY-SA licensed annotated texts from news articles, wiki entries and reviews.
More information at https://universaldependencies.org.

Download the treebank.

library(utils)
settings <- list()
settings$ud.train    <- "https://raw.githubusercontent.com/UniversalDependencies/UD_German-GSD/r2.6/de_gsd-ud-train.conllu"
settings$ud.dev      <- "https://raw.githubusercontent.com/UniversalDependencies/UD_German-GSD/r2.6/de_gsd-ud-dev.conllu"
settings$ud.test     <- "https://raw.githubusercontent.com/UniversalDependencies/UD_German-GSD/r2.6/de_gsd-ud-test.conllu"
## Download the conllu files
download.file(url = settings$ud.train, destfile = "train.conllu")
download.file(url = settings$ud.dev,   destfile = "dev.conllu")
download.file(url = settings$ud.test,  destfile = "test.conllu")

Build a word2vec model using our R package word2vec

  • Create wordvectors on the downloaded training dataset as these are used for training the dependency parser
  • Save the word vectors to disk
  • Inspect the word2vec model a bit by showing similarities to some German words

library(udpipe)
library(word2vec)
txt <- udpipe_read_conllu("train.conllu")
txt <- paste.data.frame(txt, term = "token", group = c("doc_id", "paragraph_id", "sentence_id"), collapse = " ")
txt <- txt$token
w2v <- word2vec(txt, type = "skip-gram", dim = 50, window = 10, min_count = 2, negative = 5, iter = 15, threads = 1)
write.word2vec(w2v, file = "wordvectors.vec", type = "txt", encoding = "UTF-8")
predict(w2v, c("gut", "freundlich"), type = "nearest", top = 20)

And train the model

  • Using the hyperparameters for the tokeniser, parts of speech tagger & lemmatizer and the dependency parser as shown here: https://github.com/bnosac/udpipe/tree/master/inst/models-ud-2.5
  • Note that model training takes a while (8 hours up to 3 days) depending on the size of the treebank and your hyperparameter settings. This example was run on a Windows laptop with a 1.7GHz i5 CPU, so no GPU needed, which makes this model building process accessible for anyone with a simple PC.

print(Sys.time())
m <- udpipe_train(file = "de_gsd-ud-2.6-20200924.udpipe",
                  files_conllu_training = "train.conllu",
                  files_conllu_holdout  = "dev.conllu",
                  annotation_tokenizer = list(dimension = 64, epochs = 100, segment_size = 200, initialization_range = 0.1,
                                              batch_size = 50, learning_rate = 0.002, learning_rate_final = 0, dropout = 0.1,
                                              early_stopping = 1),
                  annotation_tagger = list(models = 2,
                                           templates_1 = "lemmatizer",
                                           guesser_suffix_rules_1 = 8, guesser_enrich_dictionary_1 = 4, guesser_prefixes_max_1 = 4,
                                           use_lemma_1 = 1, provide_lemma_1 = 1, use_xpostag_1 = 0, provide_xpostag_1 = 0,
                                           use_feats_1 = 0, provide_feats_1 = 0, prune_features_1 = 1,
                                           templates_2 = "tagger",
                                           guesser_suffix_rules_2 = 8, guesser_enrich_dictionary_2 = 4, guesser_prefixes_max_2 = 0,
                                           use_lemma_2 = 1, provide_lemma_2 = 0, use_xpostag_2 = 1, provide_xpostag_2 = 1,
                                           use_feats_2 = 1, provide_feats_2 = 1, prune_features_2 = 1),
                  annotation_parser = list(iterations = 30,
                                           embedding_upostag = 20, embedding_feats = 20, embedding_xpostag = 0,
                                           embedding_form = 50, embedding_form_file = "wordvectors.vec",
                                           embedding_lemma = 0, embedding_deprel = 20, learning_rate = 0.01,
                                           learning_rate_final = 0.001, l2 = 0.5, hidden_layer = 200,
                                           batch_size = 10, transition_system = "projective", transition_oracle = "dynamic",
                                           structured_interval = 8))
print(Sys.time())

You can see the logs of this run here. Now that your model is ready, you can use it on your own terms and start using it to annotate your text.

model <- udpipe_load_model("de_gsd-ud-2.6-20200924.udpipe")
texts <- data.frame(doc_id = c("doc1", "doc2"), text = c("Die Wissenschaft ist das beste, was wir haben.", "Von dort war Kraftstoff in das Erdreich gesickert."), stringsAsFactors = FALSE)
anno <- udpipe(texts, model, trace = 10)
View(anno)

udpipe parser table

Enjoy!

Thanks to Slav Petrov, Wolfgang Seeker, Ryan McDonald, Joakim Nivre, Daniel Zeman, Adriane Boyd for creating and distributing the UD_German-GSD treebank and to the UDPipe authors in particular Milan Straka.

finding contour lines

Finally, the R package you all have been waiting for has arrived: image.ContourDetector, developed at https://github.com/bnosac/image. It detects contour lines in images using the 'Unsupervised Smooth Contour Detection' algorithm described at http://www.ipol.im/pub/art/2016/175.

Have you always wanted to be able to draw like you went to art school? Let me show you how to quickly do this.

example contourlines

If you want to reproduce this, the following snippets show how. The steps are as follows:

1. Install the packages from CRAN

install.packages("image.ContourDetector")
install.packages("magick")
install.packages("sp")

2. Get an image, put it into grey scale, pass the pixels to the function and off you go.

library(magick)
library(image.ContourDetector)
library(sp)
img <- image_read("https://cdn.mos.cms.futurecdn.net/9sUwFGNJvviJks7jNQ7AWc-1200-80.jpg")
mat <- image_data(img, channels = "gray")
mat <- as.integer(mat, transpose = TRUE)
mat <- drop(mat)
contourlines <- image_contour_detector(mat)
plt <- plot(contourlines)
class(plt)

example contourlines linesonly

3. If you want to have the same image as shown at the top of the article:

Put the 3 images (original, combined, contour lines only) together in 1 plot using the excellent magick R package:

plt <- image_graph(width = image_info(img)$width, height = image_info(img)$height)
plot(contourlines)
dev.off()
plt_combined <- image_graph(width = image_info(img)$width, height = image_info(img)$height)
plot(img)
plot(contourlines, add = TRUE, col = "red", lwd = 5)
dev.off()
combi <- image_append(c(img, plt_combined, plt))
combi
image_write(combi, "example-contourlines.png", format = "png")