Extensions
==========
Here we detail concrete implementations of various XACC interfaces as well as any
input parameters they expose.

Optimizers
----------
XACC provides implementations for the ``Optimizer`` that delegate to NLOpt and MLPack. Here we demonstrate
the various ways to configure these optimizers for a number of different solver types.

In addition to the enumerated parameters below, all ``Optimizers`` expose an ``initial-parameters`` key
that must be a list or vector of doubles with size equal to the number of parameters. By default, ``[0.,0.,...,0.,0.]`` is used.

MLPack
++++++
+------------------------+------------------------+-----------------------------------------------------------------+---------+--------+
| ``mlpack-optimizer``   | Optimizer Parameter    |                  Parameter Description                          | default | type   |
+========================+========================+=================================================================+=========+========+
|        adam            | mlpack-step-size       | Step size for each iteration.                                   | .5      | double |
+------------------------+------------------------+-----------------------------------------------------------------+---------+--------+
|                        | mlpack-beta1           | Exponential decay rate for the first moment estimates.          | .7      | double |
+------------------------+------------------------+-----------------------------------------------------------------+---------+--------+
|                        | mlpack-beta2           | Exponential decay rate for the weighted infinity norm estimates.| .999    | double |
+------------------------+------------------------+-----------------------------------------------------------------+---------+--------+
|                        | mlpack-max-iter        | Maximum number of iterations allowed                            | 500000  | int    |
+------------------------+------------------------+-----------------------------------------------------------------+---------+--------+
|                        | mlpack-tolerance       | Maximum absolute tolerance to terminate algorithm.              | 1e-4    | double |
+------------------------+------------------------+-----------------------------------------------------------------+---------+--------+
|                        | mlpack-eps             | Value used to initialize the mean squared gradient parameter.   | 1e-8    | double |
+------------------------+------------------------+-----------------------------------------------------------------+---------+--------+
|        l-bfgs          |        None            |                                                                 |         |        |
+------------------------+------------------------+-----------------------------------------------------------------+---------+--------+
|        adagrad         | mlpack-step-size       | Step size for each iteration.                                   | .5      | double |
+------------------------+------------------------+-----------------------------------------------------------------+---------+--------+
|                        | mlpack-max-iter        | Maximum number of iterations allowed                            | 500000  | int    |
+------------------------+------------------------+-----------------------------------------------------------------+---------+--------+
|                        | mlpack-tolerance       | Maximum absolute tolerance to terminate algorithm.              | 1e-4    | double |
+------------------------+------------------------+-----------------------------------------------------------------+---------+--------+
|                        | mlpack-eps             | Value used to initialize the mean squared gradient parameter.   | 1e-8    | double |
+------------------------+------------------------+-----------------------------------------------------------------+---------+--------+
|        adadelta        | mlpack-step-size       | Step size for each iteration.                                   | .5      | double |
+------------------------+------------------------+-----------------------------------------------------------------+---------+--------+
|                        | mlpack-max-iter        | Maximum number of iterations allowed                            | 500000  | int    |
+------------------------+------------------------+-----------------------------------------------------------------+---------+--------+
|                        | mlpack-tolerance       | Maximum absolute tolerance to terminate algorithm.              | 1e-4    | double |
+------------------------+------------------------+-----------------------------------------------------------------+---------+--------+
|                        | mlpack-eps             | Value used to initialize the mean squared gradient parameter.   | 1e-8    | double |
+------------------------+------------------------+-----------------------------------------------------------------+---------+--------+
|                        | mlpack-rho             | Smoothing constant.                                             | .95     | double |
+------------------------+------------------------+-----------------------------------------------------------------+---------+--------+
|        cmaes           | mlpack-cmaes-lambda    | The population size.                                            | 0       | int    |
+------------------------+------------------------+-----------------------------------------------------------------+---------+--------+
|                        |mlpack-cmaes-upper-bound| Upper bound of decision variables.                              | 10.     | double |
+------------------------+------------------------+-----------------------------------------------------------------+---------+--------+
|                        |mlpack-cmaes-lower-bound| Lower bound of decision variables.                              | -10.0   | double |
+------------------------+------------------------+-----------------------------------------------------------------+---------+--------+
|                        | mlpack-max-iter        | Maximum number of iterations allowed                            | 500000  | int    |
+------------------------+------------------------+-----------------------------------------------------------------+---------+--------+
|                        | mlpack-tolerance       | Maximum absolute tolerance to terminate algorithm.              | 1e-4    | double |
+------------------------+------------------------+-----------------------------------------------------------------+---------+--------+
|        gd              | mlpack-step-size       | Step size for each iteration.                                   | .5      | double |
+------------------------+------------------------+-----------------------------------------------------------------+---------+--------+
|                        | mlpack-max-iter        | Maximum number of iterations allowed                            | 500000  | int    |
+------------------------+------------------------+-----------------------------------------------------------------+---------+--------+
|                        | mlpack-tolerance       | Maximum absolute tolerance to terminate algorithm.              | 1e-4    | double |
+------------------------+------------------------+-----------------------------------------------------------------+---------+--------+
|        momentum-sgd    | mlpack-step-size       | Step size for each iteration.                                   | .5      | double |
+------------------------+------------------------+-----------------------------------------------------------------+---------+--------+
|                        | mlpack-max-iter        | Maximum number of iterations allowed                            | 500000  | int    |
+------------------------+------------------------+-----------------------------------------------------------------+---------+--------+
|                        | mlpack-tolerance       | Maximum absolute tolerance to terminate algorithm.              | 1e-4    | double |
+------------------------+------------------------+-----------------------------------------------------------------+---------+--------+
|                        | mlpack-momentum        | Momentum coefficient applied to parameter updates.              | .05     | double |
+------------------------+------------------------+-----------------------------------------------------------------+---------+--------+
|   momentum-nesterov    | mlpack-step-size       | Step size for each iteration.                                   | .5      | double |
+------------------------+------------------------+-----------------------------------------------------------------+---------+--------+
|                        | mlpack-max-iter        | Maximum number of iterations allowed                            | 500000  | int    |
+------------------------+------------------------+-----------------------------------------------------------------+---------+--------+
|                        | mlpack-tolerance       | Maximum absolute tolerance to terminate algorithm.              | 1e-4    | double |
+------------------------+------------------------+-----------------------------------------------------------------+---------+--------+
|                        | mlpack-momentum        | Momentum coefficient applied to parameter updates.              | .05     | double |
+------------------------+------------------------+-----------------------------------------------------------------+---------+--------+
|        sgd             | mlpack-step-size       | Step size for each iteration.                                   | .5      | double |
+------------------------+------------------------+-----------------------------------------------------------------+---------+--------+
|                        | mlpack-max-iter        | Maximum number of iterations allowed                            | 500000  | int    |
+------------------------+------------------------+-----------------------------------------------------------------+---------+--------+
|                        | mlpack-tolerance       | Maximum absolute tolerance to terminate algorithm.              | 1e-4    | double |
+------------------------+------------------------+-----------------------------------------------------------------+---------+--------+
|        rms-prop        | mlpack-step-size       | Step size for each iteration.                                   | .5      | double |
+------------------------+------------------------+-----------------------------------------------------------------+---------+--------+
|                        | mlpack-max-iter        | Maximum number of iterations allowed                            | 500000  | int    |
+------------------------+------------------------+-----------------------------------------------------------------+---------+--------+
|                        | mlpack-tolerance       | Maximum absolute tolerance to terminate algorithm.              | 1e-4    | double |
+------------------------+------------------------+-----------------------------------------------------------------+---------+--------+
|                        | mlpack-alpha           | Smoothing constant                                              | .99     | double |
+------------------------+------------------------+-----------------------------------------------------------------+---------+--------+
|                        | mlpack-eps             | Value used to initialize the mean squared gradient parameter.   | 1e-8    | double |
+------------------------+------------------------+-----------------------------------------------------------------+---------+--------+

Various examples of using the mlpack optimizer:

.. code:: cpp

   // sgd with defaults
   auto optimizer = xacc::getOptimizer("mlpack", {std::make_pair("mlpack-optimizer", "sgd")});
   // default adam
   optimizer = xacc::getOptimizer("mlpack");
   // adagrad with 30 max iters and .01 step size
   optimizer = xacc::getOptimizer("mlpack", {std::make_pair("mlpack-optimizer", "adagrad"),
                                             std::make_pair("mlpack-step-size", .01),
                                             std::make_pair("mlpack-max-iter", 30)});


or in Python

.. code:: python

   optimizer = xacc.getOptimizer('mlpack', {'mlpack-optimizer':'sgd'})
   # default adam
   optimizer = xacc.getOptimizer('mlpack')
   # adagrad with 30 max iters and .01 step size
   optimizer = xacc.getOptimizer('mlpack', {'mlpack-optimizer':'adagrad',
                                            'mlpack-step-size':.01,
                                            'mlpack-max-iter':30})
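
To make these knobs concrete, here is a plain-Python sketch of a momentum-SGD update loop in the spirit of ``mlpack-step-size``, ``mlpack-momentum``, ``mlpack-max-iter``, and ``mlpack-tolerance``. This illustrates the update rule only; it is not mlpack's actual implementation.

.. code:: python

   # Illustrative momentum-SGD on f(x) = (x - 3)^2, whose gradient is 2*(x - 3).
   # The keyword defaults mirror the mlpack-* defaults tabulated above.
   def momentum_sgd(grad, x0, step_size=0.5, momentum=0.05,
                    max_iter=500000, tolerance=1e-4):
       x, velocity, prev = x0, 0.0, float('inf')
       for _ in range(max_iter):
           velocity = momentum * velocity - step_size * grad(x)
           x += velocity
           if abs(x - prev) < tolerance:  # terminate once updates stall
               break
           prev = x
       return x

   x_min = momentum_sgd(lambda x: 2 * (x - 3), x0=0.0)  # converges near 3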

NLOpt
+++++
+------------------------+------------------------+-----------------------------------------------------------------+---------+--------+
| ``nlopt-optimizer``    | Optimizer Parameter    |                  Parameter Description                          | default | type   |
+========================+========================+=================================================================+=========+========+
|        cobyla          | nlopt-ftol             | Maximum absolute tolerance to terminate algorithm.              | 1e-6    | double |
+------------------------+------------------------+-----------------------------------------------------------------+---------+--------+
|                        | nlopt-maxeval          | Maximum number of iterations allowed                            | 1000    | int    |
+------------------------+------------------------+-----------------------------------------------------------------+---------+--------+
|        l-bfgs          | nlopt-ftol             | Maximum absolute tolerance to terminate algorithm.              |   1e-6  | double |
+------------------------+------------------------+-----------------------------------------------------------------+---------+--------+
|                        | nlopt-maxeval          | Maximum number of iterations allowed                            | 1000    | int    |
+------------------------+------------------------+-----------------------------------------------------------------+---------+--------+
|      nelder-mead       | nlopt-ftol             | Maximum absolute tolerance to terminate algorithm.              | 1e-6    | double |
+------------------------+------------------------+-----------------------------------------------------------------+---------+--------+
|                        | nlopt-maxeval          | Maximum number of iterations allowed                            | 1000    | int    |
+------------------------+------------------------+-----------------------------------------------------------------+---------+--------+
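
The ``nlopt-ftol`` setting corresponds to a function-value stopping criterion: iteration halts once successive objective values improve by less than the tolerance, or once ``nlopt-maxeval`` evaluations have been spent. A rough plain-Python illustration of that termination logic (not NLOpt's actual algorithms):

.. code:: python

   # Toy minimizer showing ftol/maxeval-style termination on f(x) = x^2 + 1.
   def minimize_with_ftol(f, x0, step=0.1, ftol=1e-6, maxeval=1000):
       x, fx, nevals = x0, f(x0), 1
       while nevals < maxeval and step > 1e-12:
           lo, hi = f(x - step), f(x + step)  # probe both directions
           nevals += 2
           best, fbest = (x - step, lo) if lo < hi else (x + step, hi)
           if fx - fbest < ftol:   # progress below ftol: shrink the step
               step /= 2
           else:
               x, fx = best, fbest
       return x, fx

   x_min, f_min = minimize_with_ftol(lambda x: x * x + 1, x0=1.0)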

Accelerators
------------
Here we detail all available XACC ``Accelerators`` and their exposed input parameters.

IBM
+++
The IBM Accelerator by default targets the remote ``ibmq_qasm_simulator``. You can point to a
different backend in two ways:

.. code:: cpp

   auto ibm_valencia = xacc::getAccelerator("ibm:ibmq_valencia");
   ... or ...
   auto ibm_valencia = xacc::getAccelerator("ibm", {std::make_pair("backend", "ibmq_valencia")});

in Python

.. code:: python

   ibm_valencia = xacc.getAccelerator('ibm:ibmq_valencia')
   ... or ...
   ibm_valencia = xacc.getAccelerator('ibm', {'backend':'ibmq_valencia'})

You can specify the number of shots in this way as well

.. code:: cpp

   auto ibm_valencia = xacc::getAccelerator("ibm:ibmq_valencia", {std::make_pair("shots", 2048)});

or in Python

.. code:: python

   ibm_valencia = xacc.getAccelerator('ibm:ibmq_valencia', {'shots':2048})

In order to target the remote backend (for ``initialize()`` or ``execute()``), you must provide
your IBM credentials to XACC. To do this, add the following to a plain text file ``$HOME/.ibm_config``:

.. code:: bash

   key: YOUR_KEY_HERE
   url: https://q-console-api.mybluemix.net
   hub: HUB
   group: GROUP
   project: PROJECT

You can also create this file using the ``xacc`` Python module

.. code:: bash

   $ python3 -m xacc -c ibm -k YOUR_KEY --group GROUP --hub HUB --project PROJECT --url URL
   [ for public API ]
   $ python3 -m xacc -c ibm -k YOUR_KEY

where you provide URL, HUB, PROJECT, GROUP, and YOUR_KEY.
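
The config file is a flat list of ``key: value`` lines. For illustration, here is a minimal parser for that format (a hypothetical helper; XACC reads and parses this file itself):

.. code:: python

   # Parse a ".ibm_config"-style "key: value" file into a dict.
   def parse_config(text):
       config = {}
       for line in text.splitlines():
           if ':' not in line:
               continue
           key, value = line.split(':', 1)  # split once: URLs contain ':'
           config[key.strip()] = value.strip()
       return config

   sample = """key: YOUR_KEY_HERE
   url: https://q-console-api.mybluemix.net
   hub: HUB
   group: GROUP
   project: PROJECT"""
   cfg = parse_config(sample)  # cfg['hub'] == 'HUB'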

Aer
+++
The Aer Accelerator provides a great example of contributing plugins or extensions to core C++ XACC interfaces
from Python. To see how this is done, checkout the code `here <https://github.com/eclipse/xacc/blob/master/python/plugins/aer/aer_accelerator.py>`_.
This Accelerator connects the XACC IR infrastructure with the ``qiskit-aer`` simulator, providing a
robust simulator that can mimic noise models published by IBM backends. Note that to use these noise models you must
have set up your ``$HOME/.ibm_config`` file (see the IBM Accelerator discussion above).

.. code:: python

   aer = xacc.getAccelerator('aer')
   ... or ...
   aer = xacc.getAccelerator('aer', {'shots':8192})
   ... or ...
   # For ibmq_johannesburg-like readout error
   aer = xacc.getAccelerator('aer', {'shots':2048, 'backend':'ibmq_johannesburg', 'readout_error':True})
   ... or ...
   # For all ibmq_johannesburg-like errors
   aer = xacc.getAccelerator('aer', {'shots':2048, 'backend':'ibmq_johannesburg',
                                    'readout_error':True,
                                    'thermal_relaxation':True,
                                    'gate_error':True})

You can also use this simulator from C++, just make sure you load the Python external language plugin.

.. code:: cpp

   xacc::Initialize();
   xacc::external::load_external_language_plugins();
   auto accelerator = xacc::getAccelerator("aer", {std::make_pair("shots", 8192),
                                                   std::make_pair("readout_error", true)});
   // run simulation

   xacc::external::unload_external_language_plugins();
   xacc::Finalize();
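
Conceptually, a readout error acts like a per-qubit misclassification probability applied to otherwise ideal measurement counts. A small stand-alone sketch of that effect (illustrative only; Aer's device-derived noise models are far more detailed):

.. code:: python

   import random

   # Apply a symmetric per-bit readout flip probability to ideal counts.
   def apply_readout_error(counts, p_flip, rng):
       noisy = {}
       for bitstring, n in counts.items():
           for _ in range(n):
               readout = ''.join(
                   ('1' if b == '0' else '0') if rng.random() < p_flip else b
                   for b in bitstring)
               noisy[readout] = noisy.get(readout, 0) + 1
       return noisy

   rng = random.Random(7)
   ideal = {'00': 1024, '11': 1024}  # ideal Bell-state counts
   noisy = apply_readout_error(ideal, 0.02, rng)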

QCS
+++
XACC provides support for the Rigetti QCS platform through the QCS Accelerator implementation. This
Accelerator requires a few extra third-party libraries that you will need to install in order
to get QCS support. Specifically we need ``libzmq``, ``cppzmq``, ``msgpack-c``, and ``uuid-dev``.
Note that, more than likely, this will only be built on the QCS CentOS 7 VM, so the following
instructions are specific to that OS.

.. code:: bash

   $ git clone https://github.com/zeromq/libzmq
   $ cd libzmq/ && mkdir build && cd build
   $ cmake .. -DCMAKE_INSTALL_PREFIX=~/.zmq
   $ make -j12 install

   $ cd ../..
   $ git clone https://github.com/zeromq/cppzmq
   $ cd cppzmq/ && mkdir build && cd build/
   $ cmake .. -DCMAKE_INSTALL_PREFIX=~/.zmq -DCMAKE_PREFIX_PATH=~/.zmq
   $ make -j12 install

   $ cd ../..
   $ git clone https://github.com/msgpack/msgpack-c/
   $ cd msgpack-c/ && mkdir build && cd build
   $ cmake .. -DCMAKE_INSTALL_PREFIX=~/.zmq
   $ make -j12 install
   $ cd ../..

   $ sudo yum install uuid-dev devtoolset-8-gcc devtoolset-8-gcc-c++
   $ scl enable devtoolset-8 -- bash

   [go to your xacc build directory]
   $ cmake .. -DUUID_LIBRARY=/usr/lib64/libuuid.so.1
   $ make install

There is no further configuration for using the QCS platform.

To use the QCS Accelerator to target a lattice such as ``Aspen-4-2Q-A`` (replace with your own lattice):

.. code:: cpp

   auto qcs = xacc::getAccelerator("qcs:Aspen-4-2Q-A", {std::make_pair("shots", 10000)});

or in Python

.. code:: python

   qcs = xacc.getAccelerator('qcs:Aspen-4-2Q-A', {'shots':10000})

For now you must manually map your ``CompositeInstruction`` to the correct physical bits
provided by your lattice. To do so, run

.. code:: python

   qpu = xacc.getAccelerator('qcs:Aspen-4-2Q-A')
   [given CompositeInstruction f]
   f.defaultPlacement(qpu)
   [or manually]
   f.mapBits([5,9])
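
Conceptually, ``mapBits([5,9])`` re-indexes each instruction's logical qubits onto the physical qubits of the lattice (logical 0 becomes 5, logical 1 becomes 9). A hypothetical sketch of that remapping, using a made-up ``(gate_name, qubit_list)`` instruction representation rather than XACC's actual IR:

.. code:: python

   # Remap logical qubit indices onto physical lattice qubits, in the spirit
   # of mapBits([5, 9]): logical 0 -> 5, logical 1 -> 9.
   def map_bits(instructions, physical):
       return [(name, [physical[q] for q in qubits])
               for name, qubits in instructions]

   circuit = [('X', [0]), ('CNOT', [0, 1])]
   mapped = map_bits(circuit, [5, 9])  # [('X', [5]), ('CNOT', [5, 9])]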

IonQ
++++
The IonQ Accelerator by default targets the remote ``simulator`` backend. You can point to the physical
QPU in two ways:

.. code:: cpp

   auto ionq = xacc::getAccelerator("ionq:qpu");
   ... or ...
   auto ionq = xacc::getAccelerator("ionq", {std::make_pair("backend", "qpu")});

in Python

.. code:: python

   ionq = xacc.getAccelerator('ionq:qpu')
   ... or ...
   ionq = xacc.getAccelerator('ionq', {'backend':'qpu'})

You can specify the number of shots in this way as well

.. code:: cpp

   auto ionq = xacc::getAccelerator("ionq", {std::make_pair("shots", 2048)});

or in Python

.. code:: python

   ionq = xacc.getAccelerator('ionq', {'shots':2048})

In order to target the simulator or QPU (for ``initialize()`` or ``execute()``) you must provide
your IonQ credentials to XACC. To do this add the following to a plain text file ``$HOME/.ionq_config``

.. code:: bash

   key: YOUR_KEY_HERE
   url: https://api.ionq.co/v0

DWave
+++++
The DWave Accelerator by default targets the remote ``DW_2000Q_VFYC_2_1`` backend. You can point to a
different backend in two ways:

.. code:: cpp

   auto dw = xacc::getAccelerator("dwave:DW_2000Q");
   ... or ...
   auto dw = xacc::getAccelerator("dwave", {std::make_pair("backend", "DW_2000Q")});

in Python

.. code:: python

   dw = xacc.getAccelerator('dwave:DW_2000Q')
   ... or ...
   dw = xacc.getAccelerator('dwave', {'backend':'DW_2000Q'})

You can specify the number of shots in this way as well

.. code:: cpp

   auto dw = xacc::getAccelerator("dwave", {std::make_pair("shots", 2048)});

or in Python

.. code:: python

   dw = xacc.getAccelerator('dwave', {'shots':2048})

In order to target the remote backend (for ``initialize()`` or ``execute()``) you must provide
your DWave credentials to XACC. To do this add the following to a plain text file ``$HOME/.dwave_config``

.. code:: bash

   key: YOUR_KEY_HERE
   url: https://cloud.dwavesys.com

You can also create this file using the ``xacc`` Python module

.. code:: bash

   $ python3 -m xacc -c dwave -k YOUR_KEY

where you provide YOUR_KEY.

DWave Neal
++++++++++
The DWave Neal Accelerator provides another example of contributing plugins or extensions to core C++ XACC interfaces
from Python. To see how this is done, checkout the code `here <https://github.com/eclipse/xacc/blob/master/python/plugins/dwave/dwave_neal_accelerator.py>`_.
This Accelerator connects the XACC IR infrastructure with the ``dwave-neal`` simulator, providing a local
simulator that can mimic DWave QPU execution.

.. code:: python

   neal = xacc.getAccelerator('dwave-neal')
   ... or ...
   neal = xacc.getAccelerator('dwave-neal', {'shots':2000})

You can also use this simulator from C++, just make sure you load the Python external language plugin.

.. code:: cpp

   xacc::Initialize();
   xacc::external::load_external_language_plugins();
   auto accelerator = xacc::getAccelerator("dwave-neal", {std::make_pair("shots", 8192)});
   // run simulation

   xacc::external::unload_external_language_plugins();
   xacc::Finalize();
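
``dwave-neal`` is a simulated annealing sampler, so each "shot" corresponds to one annealing run over the problem's Ising energy. A minimal Metropolis-style sketch of that idea (illustrative; not the actual neal implementation):

.. code:: python

   import math
   import random

   # Metropolis annealing of an Ising energy
   # E(s) = sum_i h[i]*s[i] + sum_{(i,j)} J[(i,j)]*s[i]*s[j], with s[i] in {-1, +1}.
   def anneal(h, J, sweeps=500, seed=0):
       rng = random.Random(seed)
       spins = [rng.choice([-1, 1]) for _ in h]

       def energy(s):
           e = sum(hi * si for hi, si in zip(h, s))
           return e + sum(Jij * s[i] * s[j] for (i, j), Jij in J.items())

       for sweep in range(sweeps):
           beta = 0.1 + 5.0 * sweep / sweeps  # ramp up inverse temperature
           for i in range(len(spins)):
               before = energy(spins)
               spins[i] *= -1                 # propose flipping spin i
               delta = energy(spins) - before
               if delta > 0 and rng.random() >= math.exp(-beta * delta):
                   spins[i] *= -1             # reject: undo the flip
       return spins, energy(spins)

   # Ferromagnetic pair in a field; the ground state is s = [-1, -1] with E = -3.
   spins, e = anneal(h=[1.0, 1.0], J={(0, 1): -1.0})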


Algorithms
----------
XACC exposes hybrid quantum-classical Algorithm implementations for the variational quantum eigensolver (VQE), data-driven
circuit learning (DDCL), and chemistry reduced density matrix generation (RDM).

VQE
+++
The VQE Algorithm requires the following input information:

+------------------------+-----------------------------------------------------------------+--------------------------------------+
|  Algorithm Parameter   |                  Parameter Description                          |             type                     |
+========================+=================================================================+======================================+
|    observable          | The Hermitian operator whose ground eigenvalue VQE computes     | std::shared_ptr<Observable>          |
+------------------------+-----------------------------------------------------------------+--------------------------------------+
|    ansatz              | The unmeasured, parameterized quantum circuit                   | std::shared_ptr<CompositeInstruction>|
+------------------------+-----------------------------------------------------------------+--------------------------------------+
|    optimizer           | The classical optimizer to use                                  | std::shared_ptr<Optimizer>           |
+------------------------+-----------------------------------------------------------------+--------------------------------------+
|    accelerator         | The Accelerator backend to target                               | std::shared_ptr<Accelerator>         |
+------------------------+-----------------------------------------------------------------+--------------------------------------+

This Algorithm will add ``opt-val`` (``double``) and ``opt-params`` (``std::vector<double>``) to the provided ``AcceleratorBuffer``.
The results of the algorithm are therefore retrieved via these keys (see the snippet below). Note that you can
control the initial VQE parameters via the ``Optimizer``'s ``initial-parameters`` key (all zeros by default).

.. code:: cpp

   #include "xacc.hpp"
   #include "xacc_observable.hpp"

   int main(int argc, char **argv) {
     xacc::Initialize(argc, argv);

     // Get reference to the Accelerator
     // specified by --accelerator argument
     auto accelerator = xacc::getAccelerator();

     // Create the N=2 deuteron Hamiltonian
     auto H_N_2 = xacc::quantum::getObservable(
         "pauli", std::string("5.907 - 2.1433 X0X1 "
                           "- 2.1433 Y0Y1"
                           "+ .21829 Z0 - 6.125 Z1"));

     auto optimizer = xacc::getOptimizer("nlopt",
                            {std::make_pair("initial-parameters", {.5})});

     // JIT map Quil QASM Ansatz to IR
     xacc::qasm(R"(
    .compiler xasm
    .circuit deuteron_ansatz
    .parameters theta
    .qbit q
    X(q[0]);
    Ry(q[1], theta);
    CNOT(q[1],q[0]);
    )");
    auto ansatz = xacc::getCompiled("deuteron_ansatz");

    // Get the VQE Algorithm and initialize it
    auto vqe = xacc::getAlgorithm("vqe");
    vqe->initialize({std::make_pair("ansatz", ansatz),
                   std::make_pair("observable", H_N_2),
                   std::make_pair("accelerator", accelerator),
                   std::make_pair("optimizer", optimizer)});

    // Allocate some qubits and execute
    auto buffer = xacc::qalloc(2);
    vqe->execute(buffer);

    auto ground_energy = (*buffer)["opt-val"].as<double>();
    auto params = (*buffer)["opt-params"].as<std::vector<double>>();
  }

In Python:

.. code:: python

   import xacc

   # Get access to the desired QPU and
   # allocate some qubits to run on
   qpu = xacc.getAccelerator('tnqvm')
   buffer = xacc.qalloc(2)

   # Construct the Hamiltonian as an XACC-VQE PauliOperator
   ham = xacc.getObservable('pauli', '5.907 - 2.1433 X0X1 - 2.1433 Y0Y1 + .21829 Z0 - 6.125 Z1')


   xacc.qasm('''.compiler xasm
   .circuit ansatz2
   .parameters t0
   .qbit q
   X(q[0]);
   Ry(q[1],t0);
   CX(q[1],q[0]);
   ''')
   ansatz2 = xacc.getCompiled('ansatz2')

   opt = xacc.getOptimizer('nlopt', {'initial-parameters':[.5]})

   # Create the VQE algorithm
   vqe = xacc.getAlgorithm('vqe', {
                        'ansatz': ansatz2,
                        'accelerator': qpu,
                        'observable': ham,
                        'optimizer': opt
                        })
   vqe.execute(buffer)
   energy = buffer['opt-val']
   params = buffer['opt-params']
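
The classical core of VQE is the variational principle: for any parameterized state, the expectation value of the Hamiltonian upper-bounds the true ground-state energy, and the optimizer drives it down toward that bound. A self-contained numerical sketch with a 2x2 Hamiltonian and a one-parameter ansatz (illustrative only; real VQE estimates the expectation from measurements on the accelerator):

.. code:: python

   import math

   # H = [[a, b], [b, c]] with the one-parameter ansatz |psi(t)> = (cos t, sin t).
   # <psi(t)|H|psi(t)> = a*cos(t)^2 + c*sin(t)^2 + 2*b*sin(t)*cos(t)
   def expectation(a, b, c, t):
       return (a * math.cos(t) ** 2 + c * math.sin(t) ** 2
               + 2 * b * math.sin(t) * math.cos(t))

   a, b, c = 1.0, 0.5, -1.0
   # Stand-in for the classical Optimizer: a brute-force scan over theta.
   opt_val = min(expectation(a, b, c, 2 * math.pi * k / 10000)
                 for k in range(10000))

   # Exact ground-state energy of the 2x2 symmetric matrix, for comparison.
   exact = (a + c) / 2 - math.sqrt(((a - c) / 2) ** 2 + b ** 2)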


DDCL
++++
The DDCL Algorithm implements the following: given a target probability distribution, propose a
parameterized quantum circuit and train it (by minimizing a loss function) to reproduce that
target distribution. We design DDCL to be extensible in both its loss function and its
gradient computation strategies.

The DDCL Algorithm requires the following input information:

+------------------------+-----------------------------------------------------------------+--------------------------------------+
|  Algorithm Parameter   |                  Parameter Description                          |             type                     |
+========================+=================================================================+======================================+
|    target_dist         | The target probability distribution to reproduce                | std::vector<double>                  |
+------------------------+-----------------------------------------------------------------+--------------------------------------+
|    ansatz              | The unmeasured, parameterized quantum circuit                   | std::shared_ptr<CompositeInstruction>|
+------------------------+-----------------------------------------------------------------+--------------------------------------+
|    optimizer           | The classical optimizer to use, can be gradient based           | std::shared_ptr<Optimizer>           |
+------------------------+-----------------------------------------------------------------+--------------------------------------+
|    accelerator         | The Accelerator backend to target                               | std::shared_ptr<Accelerator>         |
+------------------------+-----------------------------------------------------------------+--------------------------------------+
|    loss                | The loss strategy to use                                        |          std::string                 |
+------------------------+-----------------------------------------------------------------+--------------------------------------+
|    gradient            | The gradient strategy to use                                    |  std::string                         |
+------------------------+-----------------------------------------------------------------+--------------------------------------+

As of this writing, ``loss`` can take the values ``js`` and ``mmd``, for the Jensen-Shannon divergence and Maximum Mean Discrepancy, respectively,
with more being added. Similarly, ``gradient`` can take the values ``js-parameter-shift`` and ``mmd-parameter-shift``. These gradient
strategies shift each parameter by plus or minus ``pi/2``.
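
To make the ``js`` loss concrete, the Jensen-Shannon divergence between two discrete distributions can be sketched in a few lines of plain Python. This is an illustrative, hypothetical helper for intuition only, not part of the XACC API:

.. code:: python

   import math

   def jensen_shannon(p, q):
       # JS divergence: the average of the KL divergences of p and q
       # against their mixture m = (p + q) / 2.
       def kl(a, b):
           return sum(x * math.log(x / y) for x, y in zip(a, b) if x > 0)
       m = [(x + y) / 2 for x, y in zip(p, q)]
       return 0.5 * kl(p, m) + 0.5 * kl(q, m)

   # Identical distributions give zero loss; disjoint ones give log(2).
   print(jensen_shannon([0.5, 0.5], [0.5, 0.5]))  # -> 0.0

Minimizing this quantity over the circuit parameters drives the measured distribution toward the target.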

.. code:: cpp

   #include "xacc.hpp"

   int main(int argc, char **argv) {
     xacc::Initialize(argc, argv);

     xacc::external::load_external_language_plugins();
     xacc::set_verbose(true);

     // Get reference to the Accelerator
     auto accelerator = xacc::getAccelerator("aer");

     auto optimizer = xacc::getOptimizer("mlpack");
     xacc::qasm(R"(
    .compiler xasm
    .circuit qubit2_depth1
    .parameters x
    .qbit q
    U(q[0], x[0], -pi/2, pi/2 );
    U(q[0], 0, 0, x[1]);
    U(q[1], x[2], -pi/2, pi/2);
    U(q[1], 0, 0, x[3]);
    CNOT(q[0], q[1]);
    U(q[0], 0, 0, x[4]);
    U(q[0], x[5], -pi/2, pi/2);
    U(q[1], 0, 0, x[6]);
    U(q[1], x[7], -pi/2, pi/2);
    )");
     auto ansatz = xacc::getCompiled("qubit2_depth1");

     std::vector<double> target_distribution {.5, .5, .5, .5};

     auto ddcl = xacc::getAlgorithm("ddcl");
     ddcl->initialize({std::make_pair("ansatz", ansatz),
                   std::make_pair("target_dist", target_distribution),
                   std::make_pair("accelerator", accelerator),
                   std::make_pair("loss", "js"),
                   std::make_pair("gradient", "js-parameter-shift"),
                   std::make_pair("optimizer", optimizer)});

     // Allocate some qubits and execute
     auto buffer = xacc::qalloc(2);
     ddcl->execute(buffer);

     // Print the result
     std::cout << "Loss: " << buffer["opt-val"].as<double>()
            << "\n";

     xacc::external::unload_external_language_plugins();
     xacc::Finalize();
   }

or in Python

.. code:: python

   import xacc
   # Get the QPU and allocate a single qubit
   qpu = xacc.getAccelerator('aer')
   qbits = xacc.qalloc(1)

   # Get the MLPack Optimizer, default is Adam
   optimizer = xacc.getOptimizer('mlpack')

   # Create a simple quantum program
   xacc.qasm('''
   .compiler xasm
   .circuit foo
   .parameters x,y,z
   .qbit q
   Ry(q[0], x);
   Ry(q[0], y);
   Ry(q[0], z);
   ''')
   f = xacc.getCompiled('foo')

   # Get the DDCL Algorithm, initialize it
   # with necessary parameters
   ddcl = xacc.getAlgorithm('ddcl', {'ansatz': f,
                                  'accelerator': qpu,
                                  'target_dist': [.5,.5],
                                  'optimizer': optimizer,
                                  'loss': 'js',
                                  'gradient': 'js-parameter-shift'})
   # execute
   ddcl.execute(qbits)

   print(qbits.keys())
   print(qbits['opt-val'])
   print(qbits['opt-params'])
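
The parameter-shift gradient strategies named above evaluate the loss with each parameter shifted by plus and minus ``pi/2`` and combine the two evaluations into a derivative estimate. The following generic sketch illustrates the rule in plain Python; it is a simplified stand-in, not the DDCL implementation:

.. code:: python

   import math

   def parameter_shift_grad(loss, params, shift=math.pi / 2):
       # Estimate d(loss)/d(theta_k) by evaluating the loss at
       # theta_k + pi/2 and theta_k - pi/2 for each parameter k.
       grad = []
       for k in range(len(params)):
           plus, minus = list(params), list(params)
           plus[k] += shift
           minus[k] -= shift
           grad.append(0.5 * (loss(plus) - loss(minus)))
       return grad

   # For loss(theta) = sin(theta_0) the shift rule is exact at theta_0 = 0:
   print(parameter_shift_grad(lambda t: math.sin(t[0]), [0.0]))

For sinusoidal loss landscapes, as produced by Pauli rotations, this rule recovers the exact gradient rather than a finite-difference approximation.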

Accelerator Decorators
----------------------

ROErrorDecorator
++++++++++++++++

The ``ROErrorDecorator`` provides an ``AcceleratorDecorator`` implementation for performing
readout error mitigation as in the `deuteron paper <https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.120.210501>`_.
It takes as input the readout error probabilities ``p(0|1)`` and ``p(1|0)`` for all qubits and shifts expectation values
accordingly (see the paper for details).

By default it will request the backend properties from the decorated ``Accelerator`` (``Accelerator::getProperties()``). This method
returns a ``HeterogeneousMap``. If this map contains vectors of doubles at the keys ``p01s`` and ``p10s``, then these
values will be used in the readout error correction. Alternatively, if the backend does not provide this data,
users can provide a custom JSON file containing the probabilities. This file should be structured as follows:

.. code:: json

   {
       "shots": 1024,
       "backend": "qcs:Aspen-2Q-A",
       "0": {
           "0|1": 0.0565185546875,
           "1|0": 0.0089111328125,
           "+": 0.0654296875,
           "-": 0.047607421875
       },
       "1": {
           "0|1": 0.095458984375,
           "1|0": 0.0115966796875,
           "+": 0.1070556640625,
           "-": 0.0838623046875
       }
   }
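
For a single qubit, this style of correction amounts to inverting the 2x2 assignment matrix built from ``p(0|1)`` and ``p(1|0)``. The following standalone sketch shows that inversion for illustration; it is not the decorator's actual code path:

.. code:: python

   def correct_probs(pm0, pm1, p01, p10):
       # Measured probabilities relate to the true ones via the
       # assignment matrix:
       #   [pm0]   [1 - p10    p01  ] [p0]
       #   [pm1] = [  p10    1 - p01] [p1]
       # Invert it to recover the true probabilities p0, p1.
       det = 1.0 - p01 - p10
       p0 = ((1.0 - p01) * pm0 - p01 * pm1) / det
       p1 = (-p10 * pm0 + (1.0 - p10) * pm1) / det
       return p0, p1

   # A qubit prepared in |0> read out with a 10% 0->1 flip rate:
   print(correct_probs(0.9, 0.1, 0.0, 0.1))  # -> (1.0, 0.0)

The decorator applies the analogous shift to measured expectation values qubit by qubit.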

Automating readout error mitigation with this decorator can be done in the following way:

.. code:: python

   qpu = xacc.getAccelerator('ibm:ibmq_johannesburg', {'shots':1024})

   # Turn on readout error correction by decorating qpu
   qpu = xacc.getAcceleratorDecorator('ro-error', qpu)

   # Now use qpu as your Accelerator...
   # execution will be automatically readout
   # error corrected

Similarly, in C++, with a user-provided configuration file:

.. code:: cpp

   auto qpu = xacc::getAccelerator("qcs:Aspen-2Q-A");
   qpu = xacc::getAcceleratorDecorator("ro-error", qpu, {std::make_pair("file", "probs.json")});


RDMPurificationDecorator
++++++++++++++++++++++++

ImprovedSamplingDecorator
+++++++++++++++++++++++++

IR Transformations
------------------

CircuitOptimizer
+++++++++++++++++
This ``IRTransformation`` of type ``Optimization`` searches the DAG representation
of a quantum circuit and removes all zero-rotations and adjacent Hadamard and CNOT pairs, and merges
adjacent common rotations (e.g. ``Rx(.1)Rx(.1) -> Rx(.2)``).
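
The rotation-merging step can be pictured with a toy gate list. This is a simplified illustration of the idea, not the optimizer's actual DAG-based implementation:

.. code:: python

   def merge_rotations(gates):
       # gates: list of (name, qubit, angle) tuples. Merge adjacent
       # same-axis rotations on the same qubit, dropping any that
       # merge to a zero rotation.
       out = []
       for name, qubit, angle in gates:
           if out and out[-1][0] == name and out[-1][1] == qubit:
               angle += out.pop()[2]
           if abs(angle) > 1e-12:
               out.append((name, qubit, angle))
       return out

   print(merge_rotations([("Rx", 0, 0.1), ("Rx", 0, 0.1)]))

The same pass eliminates a pair like ``Rx(.3)Rx(-.3)`` entirely, mirroring the zero-rotation removal described above.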

.. code:: python

   # Create a bell state program with too many cnots
   xacc.qasm('''
   .compiler xasm
   .circuit foo
   .qbit q
   H(q[0]);
   CX(q[0], q[1]);
   CX(q[0], q[1]);
   CX(q[0], q[1]);
   Measure(q[0]);
   Measure(q[1]);
   ''')
   f = xacc.getCompiled('foo')
   assert(6 == f.nInstructions())

   # Run the circuit-optimizer IRTransformation, can pass
   # accelerator (here None) and options (here empty dict())
   optimizer = xacc.getIRTransformation('circuit-optimizer')
   optimizer.apply(f, None, {})

   # should have 4 instructions, not 6
   assert(4 == f.nInstructions())


Observables
-----------

Psi4 Frozen-Core
++++++++++++++++
The ``psi4-frozen-core`` observable generates a fermionic
observable using Psi4, based on a user-provided dictionary of options.
To use this Observable, ensure you have Psi4 installed under the same
``python3`` used for the XACC Python API:

.. code:: bash

   $ git clone https://github.com/psi4/psi4 && cd psi4 && mkdir build && cd build
   $ cmake .. -DPYTHON_EXECUTABLE=$(which python3) -DCMAKE_INSTALL_PREFIX=$(python3 -m site --user-site)/psi4
   $ make -j8 install
   $ export PYTHONPATH=$(python3 -m site --user-site)/psi4/lib:$PYTHONPATH

This observable type takes a dictionary of options describing the
molecular geometry (key ``geometry``), the basis set (key ``basis``),
and the lists of frozen (key ``frozen-spin-orbitals``) and active (key ``active-spin-orbitals``) spin
orbitals.

With Psi4 and XACC installed, you can use the frozen-core
Observable from Python in the following way:

.. code:: python

   import xacc

   geom = '''
   0 1
   Na  0.000000   0.0      0.0
   H   0.0        0.0  1.914388
   symmetry c1
   '''
   fo = [0, 1, 2, 3, 4, 10, 11, 12, 13, 14]
   ao = [5, 9, 15, 19]

   H = xacc.getObservable('psi4-frozen-core', {'basis': 'sto-3g',
                                       'geometry': geom,
                                       'frozen-spin-orbitals': fo,
                                       'active-spin-orbitals': ao})