Summary: PicoScope 5000A sigGen example code does not work.
In the "create custom AWG" section of the demo code for using the PicoScope sig-gen (line 124 of https://github.com/picotech/picosdk-pyt ... aSigGen.py on GitHub), the default max and min values of the AWG buffer and its maximum length are all set to zero, with types uint16 and uint32. The programming manual says that these values vary by device.
I know that in the LabVIEW SDK code for my device, the sequences support a buffer of floats from -1 to 1, up to 48k samples long. However, because the data types passed to the "ps.ps5000aSigGenArbitraryMinMaxValues" function are unsigned, I would expect this to be different when using the Python wrappers. Yet the code linked on GitHub does not even seem to handle this correctly.
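For what it's worth, this is roughly how I would have expected the query to be called from the Python wrappers if the zeros are just placeholders for output parameters. This is my own sketch based on the names in the programming manual, not the repo code, so the ctypes types (signed vs. unsigned 16-bit in particular) are a guess:

```python
import ctypes
from picosdk.ps5000a import ps5000a as ps

# Open the scope (simplified; the examples also check the returned status
# and handle the power-source warning for USB-powered units).
chandle = ctypes.c_int16()
resolution = ps.PS5000A_DEVICE_RESOLUTION["PS5000A_DR_12BIT"]
ps.ps5000aOpenUnit(ctypes.byref(chandle), None, resolution)

# Variables passed by reference for the driver to fill in, initialised to
# zero as in the example. The 16-bit types here are my guess from the manual
# and could just as well be c_uint16.
min_awg_value = ctypes.c_int16(0)
max_awg_value = ctypes.c_int16(0)
min_awg_size = ctypes.c_uint32(0)
max_awg_size = ctypes.c_uint32(0)

ps.ps5000aSigGenArbitraryMinMaxValues(
    chandle,
    ctypes.byref(min_awg_value),
    ctypes.byref(max_awg_value),
    ctypes.byref(min_awg_size),
    ctypes.byref(max_awg_size),
)
print(min_awg_value.value, max_awg_value.value,
      min_awg_size.value, max_awg_size.value)
```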
When you run the template code, the output voltage is just noise (or, if you change the DC bias, DC-biased noise). This makes sense, because the included example sets the max and min buffer values to zero and the max buffer length to zero...
I don't see how the included example code can be right, and I have not been able to locate any documentation on my device's limits for these settings or on how to use this function properly. You cannot have a defined AWG sequence if the max and min values are zero... Additionally, based on the documentation, and the fact that the analog output is usually uint16, it does not appear that these values reference a struct. The C++ SDK example for my unit also uses zeros for each of these values, so that was clearly intentional, but it does not work either.
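For example, if the driver really does want integer samples between the device-specific min and max, I would expect the -1 to 1 float sequence from LabVIEW to need rescaling along these lines before being handed to the AWG call (again, just a sketch of what I think should happen, with a made-up +/-32767 range and a guessed int16 output type, not code from the example):

```python
import numpy as np

def scale_to_awg(samples, awg_min, awg_max):
    """Map a float waveform in [-1, 1] onto the device's integer AWG range.

    awg_min / awg_max are whatever ps5000aSigGenArbitraryMinMaxValues reports
    for this scope; the int16 cast is my assumption about the sample type.
    """
    samples = np.clip(np.asarray(samples, dtype=np.float64), -1.0, 1.0)
    mid = (awg_max + awg_min) / 2.0
    half_span = (awg_max - awg_min) / 2.0
    return np.round(mid + samples * half_span).astype(np.int16)

# One cycle of a sine wave, scaled to a hypothetical +/-32767 range.
waveform = scale_to_awg(
    np.sin(np.linspace(0, 2 * np.pi, 4096, endpoint=False)),
    -32767, 32767)
```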
Please advise; I am using a PicoScope 5444B.