
Action of generic linear transformation f not in accord with f.matrix() #465

@Greg1950

Description

This is related to #461. Herein I will use tensor notation, with indexes placed covariantly (subscript level) and contravariantly (superscript level); in contrast, GAlgebra writes all indexes at subscript level.

Attached is a zip file containing a Jupyter notebook, a PDF of the notebook, and the unofficial GAlgebra module gprinter.py, which the notebook uses. I made some slight modifications to the method matrix(self) in the most recent release of lt.py; those modifications are described at the start of the notebook. The comments below assume the modified method is in place.

The modified method worked for all test cases on which I tried it. However, testing revealed a new problem. Let's distinguish between a DESIRED linear transformation f, with matrix [ {f^i}_j ], and the ACTUAL transformation F, with matrix [ {F^i}_j ], yielded by instantiation. The matrices are defined by the actions of the transformations on basis vectors, specifically f(e_j) = \sum_{i=1}^n {f^i}_j e_i and F(e_j) = \sum_{i=1}^n {F^i}_j e_i. Specifically, take F = GA.lt('f') (the lowercase "f" is intentional) to be a GENERIC transformation. Then F.matrix() returns a SymPy matrix [ {f^i}_j ] (note the lowercase "f"), not the actual matrix [ {F^i}_j ] of F. The entries {f^i}_j are SymPy symbols. Use the matrix [ {f^i}_j ] returned by F.matrix() to define the linear transformation f, which we'll call the DESIRED transformation. Then f and F are the same if and only if f(e_j) = F(e_j) for each basis vector e_j, which holds if and only if {f^i}_j = {F^i}_j for all i and j.
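To make the column convention concrete: f(e_j) = \sum_i {f^i}_j e_i says that column j of the matrix holds the components of f(e_j), so two transformations agree exactly when their matrices agree column by column. A small plain-SymPy sketch of this (the symbol names here are my own illustrative choices, not GAlgebra internals):

```python
import sympy

n = 2
# Matrix of a generic transformation: entries are SymPy symbols {f^i}_j.
fmat = sympy.Matrix(n, n, lambda i, j: sympy.Symbol(f'f_{i}{j}'))

# Standard basis vectors e_0, e_1 as coordinate columns.
e = [sympy.Matrix([1 if k == j else 0 for k in range(n)]) for j in range(n)]

# f(e_j) is column j of the matrix, so f and F coincide
# if and only if their matrices have identical columns.
for j in range(n):
    assert fmat * e[j] == fmat.col(j)
```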

Investigation shows that instead one has {F^i}_j = \sum_{k=1}^n {f^i}_k g^{kj}, which is equivalent to F(e_j) = f(e^j). Notice that the free index j is at subscript level on the left side but at superscript level on the right. Consequently the two transformations f and F will be equal only when the metric is the Euclidean metric and the basis is orthonormal.
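The relation {F^i}_j = \sum_k {f^i}_k g^{kj} can be checked with plain SymPy, independently of GAlgebra: since the g^{kj} are the entries of the inverse metric, it amounts to F = f · g⁻¹ as matrices, which collapses to F = f exactly when g is the identity. (The 2×2 metric and symbol names below are my own illustrative choices.)

```python
import sympy

n = 2
# Symbolic entries {f^i}_j of the DESIRED transformation's matrix.
f = sympy.Matrix(n, n, lambda i, j: sympy.Symbol(f'f_{i}{j}'))

# A symmetric, invertible, non-Euclidean metric [g_{ij}].
g = sympy.Matrix([[1, 0], [0, 2]])
g_inv = g.inv()  # entries g^{kj}

# The ACTUAL matrix, per the relation {F^i}_j = sum_k {f^i}_k g^{kj}:
F = f * g_inv
assert F != f  # differs from f because g is not the identity

# With the Euclidean metric on an orthonormal basis, g is the identity
# and the discrepancy disappears:
F_euclid = f * sympy.eye(n).inv()
assert F_euclid == f
```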

The above-described discrepancy between F and F.matrix() does not occur when F is a SPECIFIC linear transformation, i.e. one instantiated by a command of the form F = GA.lt(a_list_of_lists).

That the problem manifests for generic transformations but not for specific ones suggests its source lies in the differing instantiation processes for the two cases.

I think I've accurately described the problem, but my meager coding skills aren't up to identifying where and how it arises in lt.py. My guess is that it's in the code that instantiates GENERIC linear transformations.

Greg Grunberg (Greg1950)

GAlgebra's matrix() method.zip
