
A web crawler written in Go

Crawls a single domain, printing a list of assets and links for each new page it finds. External links are included in the output, but are not crawled.
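The single-domain rule boils down to comparing each discovered link's host against the start URL's host. A minimal sketch of that check (a hypothetical helper, not the repo's actual code):

```go
package main

import (
	"fmt"
	"net/url"
)

// sameDomain reports whether link belongs to the host being crawled.
// Relative links are resolved against the base URL first.
func sameDomain(base, link string) bool {
	b, err := url.Parse(base)
	if err != nil {
		return false
	}
	l, err := b.Parse(link) // resolves relative links against the base
	if err != nil {
		return false
	}
	return l.Host == b.Host
}

func main() {
	base := "http://example.com"
	fmt.Println(sameDomain(base, "/about"))                  // relative link: same domain
	fmt.Println(sameDomain(base, "http://example.com/blog")) // absolute, same host
	fmt.Println(sameDomain(base, "http://other.com"))        // external: printed but not crawled
}
```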

Crawl something

Clone the repo into $GOPATH/src and do the following:

get deps:

  # inside the repo
  $ go get 

build:

  # inside the repo
  $ go build

crawl:

  # inside the repo
  $ ./web-crawler -u <url>

Make sure to include the protocol in your URL, e.g. http://

example:

  # inside the repo
  $ ./web-crawler -u http://tomblomfield.com

Test

Clone the repo and do the following:

get deps:

  # inside the repo
  $ go get 

run the tests:

  # inside the repo
  $ go test 

A test server will be automatically spun up and torn down for the tests.
