# Pretty Tensor - Fluent Neural Networks in TensorFlow

Pretty Tensor is a high level builder API for TensorFlow. It provides
thin wrappers on Tensors so that you can easily build multi-layer neural
networks.

Pretty Tensor provides a set of objects that behave like Tensors, but also
support a chainable object syntax to quickly define neural networks
and other layered architectures in TensorFlow.

    .fully_connected(10, activation_fn=None)
    .softmax(labels, name=softmax_name))
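
The fragment above is only the tail of such a chain. A complete, self-contained version might look like the following sketch; the `pt` alias, the placeholders, and the layer sizes are illustrative assumptions rather than the original example.

    import prettytensor as pt
    import tensorflow as tf

    # Placeholders standing in for real input data and one-hot labels.
    input_data = tf.placeholder(tf.float32, [None, 28, 28, 1])
    labels = tf.placeholder(tf.float32, [None, 10])

    # Each chained call wraps the previous result in a new Pretty Tensor.
    result = (pt.wrap(input_data)
              .flatten()
              .fully_connected(100, activation_fn=tf.nn.relu)
              .fully_connected(10, activation_fn=None)
              .softmax(labels, name='softmax'))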

Please see [here](docs/PrettyTensor.md) for full documentation of the PrettyTensor object for all
available operations. You can also check out the [complete documentation](docs/pretty_tensor_top_level.md).

See the tutorial directory for samples:
[tutorial/](prettytensor/tutorial/)

## Installation

The easiest way to install is just to use pip:

1. Follow the instructions at
[tensorflow.org](https://www.tensorflow.org/versions/master/get_started/os_setup.html#pip_install)

#### Full power of TensorFlow is easy to use

Pretty Tensors can be used (almost) everywhere that a tensor can. Just call
`pt.wrap` to make a tensor pretty.
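
For instance (a minimal sketch, assuming the usual `import prettytensor as pt` and `import tensorflow as tf`):

    # Wrap a plain tensor so the chainable layer methods become available on it.
    images = tf.placeholder(tf.float32, [None, 784])
    hidden = pt.wrap(images).fully_connected(100, activation_fn=tf.nn.relu)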

You can also add any existing TensorFlow function to the chain using `apply`.
`apply` applies the current Tensor as the first argument and takes all the other
arguments as normal.

*Note:* Because `apply` is so generic, Pretty Tensor doesn't try to wrap the
world.
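
A short sketch, reusing the `images` placeholder from above; `apply` hands the wrapped tensor to `tf.mul` as its first argument and passes `2.0` through unchanged:

    # apply() inserts the current tensor as the first argument of tf.mul.
    doubled = pt.wrap(images).apply(tf.mul, 2.0)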

#### Plays well with other libraries

It also uses standard TensorFlow idioms so that it plays well with other
libraries. This means that you can use it a little bit in a model or throughout.
Just make sure to run the update_ops on each training step
(see [with_update_ops](docs/pretty_tensor_top_level.md#with_update_ops)).
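
A rough sketch of that bookkeeping; it uses TensorFlow's generic `tf.GraphKeys.UPDATE_OPS` collection as an assumption, while Pretty Tensor's own helper is the `with_update_ops` call linked above:

    # Assumed: train_step is an optimizer's minimize() op defined elsewhere.
    # Grouping the update ops with it makes them run on every training step.
    update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
    train_op = tf.group(train_step, *update_ops)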

### Terse

You've already seen how a Pretty Tensor is chainable and you may have noticed
that it takes care of handling the input shape. One other feature worth noting
is defaults. Using defaults you can specify reused values in a single place
without having to repeat yourself.

    with pt.defaults_scope(activation_fn=tf.nn.relu):
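
That line is only the opening of the example; a fuller sketch under the same defaults (the layer sizes are illustrative) might look like this:

    with pt.defaults_scope(activation_fn=tf.nn.relu):
        result = (pt.wrap(input_data)
                  .flatten()
                  .fully_connected(100)   # picks up activation_fn=tf.nn.relu from the scope
                  .fully_connected(100)   # no need to repeat the default
                  .fully_connected(10, activation_fn=None))  # defaults can still be overridden
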
### Extensible

You can call any existing operation by using `apply` and it will simply
substitute the current tensor for the first argument.

    pretty_input.apply(tf.mul, 5)

You can also create a new operation. There are two supported registration
mechanisms to add your own functions. `@Register()` allows you to create a
method on PrettyTensor that operates on the tensors and returns either a loss or
a new value. Name scoping and variable scoping are handled by the framework.

The following adds a `leaky_relu` method to every Pretty Tensor:

    @pt.Register()
    def leaky_relu(input_pt):
        return tf.select(tf.greater(input_pt, 0.0), input_pt, 0.01 * input_pt)
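
Once registered, the new method can be chained like any built-in operation (a sketch reusing the `images` placeholder from above):

    # leaky_relu is now available as a method on every Pretty Tensor.
    activations = pt.wrap(images).fully_connected(100, activation_fn=None).leaky_relu()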


`@RegisterCompoundOp()` is like adding a macro; it is designed to group together
common sets of operations.
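
For example, a hypothetical compound op might bundle a fully connected layer with dropout; the name and signature below are assumptions for illustration:

    @pt.RegisterCompoundOp()
    def fc_with_dropout(input_pt, size, keep_prob):
        # Groups two existing operations into a single reusable method.
        return input_pt.fully_connected(size).apply(tf.nn.dropout, keep_prob)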

### Safe variable reuse

Within a graph, you can reuse variables by using templates. A template is
just like a regular graph except that some variables are left unbound.
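
A small sketch of that idea, assuming the `pt.template` / `construct` API described in the linked docs and two placeholder inputs `train_images` and `test_images`:

    # 'images' is left unbound here; construct() binds it later, and every
    # binding reuses the same underlying variables.
    template = (pt.template('images')
                .flatten()
                .fully_connected(100))
    train_output = template.construct(images=train_images)
    test_output = template.construct(images=test_images)  # shares weights with train_output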

See more details in [PrettyTensor class](docs/PrettyTensor.md).

### Accessing Variables

Pretty Tensor uses the standard graph collections from TensorFlow to store variables. These can be accessed using `tf.get_collection(key)` with the following keys:

* `tf.GraphKeys.VARIABLES`: all variables that should be saved (including some statistics).
* `tf.GraphKeys.TRAINABLE_VARIABLES`: all variables that can be trained (including those before a `stop_gradients` call). These are what would typically be called *parameters* of the model in ML parlance.
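
For example, fetching the trainable parameters and building a saver over the saveable variables uses plain TensorFlow calls:

    # Trainable parameters created by the Pretty Tensor layers above.
    params = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES)
    # Variables that should be written to a checkpoint, including statistics.
    saver = tf.train.Saver(tf.get_collection(tf.GraphKeys.VARIABLES))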