After upgrading to 6.5, performance is much worse

I upgraded my dev server this morning from 6.0 to 6.5 on an Ubuntu 16 server. The upgrade went fine, but performance is now horrible. Queries that used to take milliseconds now take a long time. I have an index on _type, and when I run the query from the web GUI it takes forever.

The simple test query is:

select meta().id, name from Contacts where _type = "tract_info"

Below is the plan text output. Is there anything I need to change?

> {
>   "#operator": "Sequence",
>   "#stats": {
>     "#phaseSwitches": 1,
>     "execTime": "5.479µs"
>   },
>   "~children": [
>     {
>       "#operator": "Authorize",
>       "#stats": {
>         "#phaseSwitches": 3,
>         "execTime": "6.543µs",
>         "servTime": "2.588892ms"
>       },
>       "privileges": {
>         "List": [
>           {
>             "Target": "default:Contacts",
>             "Priv": 7
>           }
>         ]
>       },
>       "~child": {
>         "#operator": "Sequence",
>         "#stats": {
>           "#phaseSwitches": 1,
>           "execTime": "6.324µs"
>         },
>         "~children": [
>           {
>             "#operator": "IndexScan3",
>             "#stats": {
>               "#itemsOut": 13,
>               "#phaseSwitches": 55,
>               "execTime": "54.786µs",
>               "kernTime": "7.477µs",
>               "servTime": "1.186231ms"
>             },
>             "index": "Ottoman__type",
>             "index_id": "f9d810b05a296595",
>             "index_projection": {
>               "primary_key": true
>             },
>             "keyspace": "Contacts",
>             "namespace": "default",
>             "spans": [
>               {
>                 "exact": true,
>                 "range": [
>                   {
>                     "high": "\"tract_info\"",
>                     "inclusion": 3,
>                     "low": "\"tract_info\""
>                   }
>                 ]
>               }
>             ],
>             "using": "gsi",
>             "#time_normal": "00:00.001",
>             "#time_absolute": 0.001241017
>           },
>           {
>             "#operator": "Fetch",
>             "#stats": {
>               "#itemsIn": 13,
>               "#itemsOut": 13,
>               "#phaseSwitches": 57,
>               "execTime": "73.358µs",
>               "kernTime": "1.276431ms",
>               "servTime": "19.471677798s"
>             },
>             "keyspace": "Contacts",
>             "namespace": "default",
>             "#time_normal": "00:19.471",
>             "#time_absolute": 19.471751156000003
>           },
>           {
>             "#operator": "Sequence",
>             "#stats": {
>               "#phaseSwitches": 1,
>               "execTime": "2.984µs"
>             },
>             "~children": [
>               {
>                 "#operator": "Filter",
>                 "#stats": {
>                   "#itemsIn": 13,
>                   "#itemsOut": 13,
>                   "#phaseSwitches": 55,
>                   "execTime": "246.846µs",
>                   "kernTime": "19.473074896s"
>                 },
>                 "condition": "((`Contacts`.`_type`) = \"tract_info\")",
>                 "#time_normal": "00:00.000",
>                 "#time_absolute": 0.000246846
>               },
>               {
>                 "#operator": "InitialProject",
>                 "#stats": {
>                   "#itemsIn": 13,
>                   "#itemsOut": 13,
>                   "#phaseSwitches": 44,
>                   "execTime": "269.816µs",
>                   "kernTime": "19.473298398s"
>                 },
>                 "result_terms": [
>                   {
>                     "expr": "(meta(`Contacts`).`id`)"
>                   },
>                   {
>                     "expr": "(`Contacts`.`name`)"
>                   }
>                 ],
>                 "#time_normal": "00:00.000",
>                 "#time_absolute": 0.000269816
>               },
>               {
>                 "#operator": "FinalProject",
>                 "#stats": {
>                   "#itemsIn": 13,
>                   "#itemsOut": 13,
>                   "#phaseSwitches": 40,
>                   "execTime": "21µs",
>                   "kernTime": "213.444µs"
>                 },
>                 "#time_normal": "00:00",
>                 "#time_absolute": 0
>               }
>             ],
>             "#time_normal": "00:00.000",
>             "#time_absolute": 0.000002984
>           }
>         ],
>         "#time_normal": "00:00.000",
>         "#time_absolute": 0.000006324
>       },
>       "#time_normal": "00:00.002",
>       "#time_absolute": 0.0025954349999999997
>     },
>     {
>       "#operator": "Stream",
>       "#stats": {
>         "#itemsIn": 13,
>         "#itemsOut": 13,
>         "#phaseSwitches": 29,
>         "execTime": "196.155µs",
>         "kernTime": "19.476264514s"
>       },
>       "#time_normal": "00:00.000",
>       "#time_absolute": 0.000196155
>     }
>   ],
>   "~versions": [
>     "2.0.0-N1QL",
>     "6.5.0-4960-enterprise"
>   ],
>   "#time_normal": "00:00.000",
>   "#time_absolute": 0.000005479
> }

The Fetch of 13 keys took 19 seconds, which is unusual. Check for any errors in the logs, or for network issues.

> {
>             "#operator": "Fetch",
>             "#stats": {
>               "#itemsIn": 13,
>               "#itemsOut": 13,
>               "#phaseSwitches": 57,
>               "execTime": "73.358µs",
>               "kernTime": "1.276431ms",
>               "servTime": "19.471677798s"
>             }
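To rule out a network or data-service problem first, you could check on the server whether the data service is listening and the node reports itself as healthy. A rough sketch (11210 is the default data-service port and 8091 the admin port; Administrator/password are placeholders for your own credentials):

# Is the data service (KV) listening on its default port?
ss -tlnp | grep 11210

# Does the cluster report the node as healthy? (placeholder credentials)
curl -s -u Administrator:password http://localhost:8091/pools/default | grep -o '"status":"[^"]*"'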

Another interesting thing: this is what I see in the web console under my buckets:

> It looks like something went wrong. Check logs for errors.

Which logs should I check, and where are they located on Ubuntu? After the upgrade I saw it run a process against the buckets in the console, I assume to update them, but it did not show any errors. Also, if I go into Documents for any of the buckets it shows the docs in the bucket, so I am not sure how the system determines there are no docs in the bucket.

Here is a screenshot of the buckets screen, which clearly shows docs in the buckets.

Unless you check the logs, we won't know much about what is happening.

https://docs.couchbase.com/server/current/manage/manage-logging/manage-logging.html#log-file-locations
You can check query.log (the query service log).
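On a default Linux install the logs are under /opt/couchbase/var/lib/couchbase/logs. Something like this is a quick first pass (adjust the path if you installed to a non-default location):

cd /opt/couchbase/var/lib/couchbase/logs

# Recent errors/warnings from the query service
grep -iE "error|warn" query.log | tail -n 50

# The indexer and data-service logs are worth a glance too
ls -lh indexer.log memcached.log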

The query service is not able to get the documents from the data node. That is why Fetch/INFER are having issues.

You can kill the cbq-engine process; it will restart automatically. Then see if the problem still persists.
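For example, something along these lines on the server; the babysitter process should bring cbq-engine back on its own within a few seconds:

# Find and kill the query engine process
pgrep -a cbq-engine
pkill cbq-engine

# Give it a moment to restart, then confirm it is back
sleep 5
pgrep -a cbq-engine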

Yes, killing the process has done the job somewhat; I now have samples.

Looking at the log, I see a few strange things:

2020-01-22T10:18:43.975-08:00 [Info] Planner::processCreateToken: There is no create token to process for node 127.0.0.1:8091
2020-01-22T10:18:44.176-08:00 [Info] Planner::processDropInstanceToken: There is no drop instance token to process for node 127.0.0.1:8091
2020-01-22T10:18:44.709-08:00 [Warn] There are more equivalent index than available nodes. Allow equivalent index of (Contacts, adv_type) to be replaced on same node.
2020-01-22T10:18:44.709-08:00 [Warn] Definition 2771612407295802114 Instance 2900815810379372199 ReplicaId 0 partitionId 0 count 2 numLiveNode 1.
2020-01-22T10:18:44.710-08:00 [Info] Planner::Fail to create plan satisyfig constraint. Re-planning. Num of Try=1. Elapsed Time=18us
2020-01-22T10:18:44.710-08:00 [Info] Planner::Fail to create plan satisyfig constraint. Re-planning. Num of Try=2. Elapsed Time=28us
2020-01-22T10:18:44.710-08:00 [Info] Planner::Fail to create plan satisyfig constraint. Re-planning. Num of Try=3. Elapsed Time=9us
2020-01-22T10:18:44.710-08:00 [Info] Planner::Fail to create plan satisyfig constraint. Re-planning. Num of Try=4. Elapsed Time=8us
2020-01-22T10:18:44.710-08:00 [Info] Planner::Fail to create plan satisyfig constraint. Re-planning. Num of Try=5. Elapsed Time=8us
2020-01-22T10:18:44.710-08:00 [Info] Planner::Fail to create plan satisyfig constraint. Re-planning. Num of Try=6. Elapsed Time=8us
2020-01-22T10:18:44.710-08:00 [Warn] Unable to find a solution with rersource costraint. Relax resource constraint check.

These messages have nothing to do with your SELECT. They are internal details from index placement: if there are not enough resources, the planner retries placement on a different node. They don't impact the query.
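If you want to double-check that the indexes themselves are healthy after those warnings, you could list their state through the query REST endpoint. A sketch (Administrator/password are placeholders; 8093 is the default query-service port):

# List all GSI indexes with their state (should be "online")
curl -s -u Administrator:password http://localhost:8093/query/service \
  -d 'statement=SELECT name, keyspace_id, state FROM system:indexes'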

Can you elaborate more on this? Are you still having issues?