
Issue 778699

Issue metadata

Status: Fixed
Owner: smut@chromium.org
Closed: Feb 2018
Cc:
Components:
EstimatedDays: ----
NextAction: ----
OS: Linux, Windows
Pri: 2
Type: Feature




Add 40 Linux and 20 Windows bots to Dart.LUCI pool

Project Member Reported by whesse@google.com, Oct 26 2017

Issue description

Please allocate 40 Linux and 20 Windows bots in the dart-ci GCE project, with the images there (the same Linux and Windows settings as the existing bots in the Dart.LUCI pool - n1-standard-8), and add them to the pool.

We also want stable Chrome (google-chrome-stable) and Firefox on all bots in the Dart.LUCI pool, for all OSs (Linux, Windows, and macOS), but I will file a separate issue to track the status of that configuration request.

Once these machines are up and running comfortably, I will put in a request to deallocate the pool machines that aren't billed to dart-ci and reallocate them in the dart-ci project.
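For reference, a single bot of the requested shape could be brought up manually with a gcloud command along these lines; the instance name, zone, and image family below are illustrative assumptions only, since the real VMs are provisioned by chrome-infra's tooling (see the later comments about Machine Provider and managed instance groups):

$ gcloud compute instances create dart-luci-linux-example \
      --project=dart-ci \
      --zone=us-central1-a \
      --machine-type=n1-standard-8 \
      --image-family=debian-9 \
      --image-project=debian-cloud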
 

Comment 1 by s...@google.com, Oct 27 2017

Cc: -smut@chromium.org
Labels: -Type-Bug Restrict-View-Google OS-Linux OS-Windows Type-Feature
Owner: smut@chromium.org
Status: Started (was: Untriaged)

Comment 2 by s...@google.com, Oct 27 2017

Labels: -Restrict-View-Google

Comment 3 by whesse@google.com, Oct 31 2017

What is the status of this? We are ready to start up our LUCI CI builders once we have the machines.

We would also like to learn to do more of this ourselves and understand all the systems involved. Are these just started as project instances, or is there a manager that starts and names them? I know there is a Puppet configuration for them, and then the configuration in our Swarming pool, which we have done in the past. I wonder what other steps there are.

Comment 4 by s...@google.com, Nov 1 2017

Was out of office yesterday.

Still working on making Puppet work with the new VMs. Currently only our GCE project is recognized, so I'm working on making it recognize dart-ci as well.

Project Member Comment 5 by bugdroid1@chromium.org, Nov 1 2017

The following revision refers to this bug:
  https://chrome-internal.googlesource.com/infradata/config/+/ef5d3dc3ee133d6fedc7b0b3dadeb92e61056460

commit ef5d3dc3ee133d6fedc7b0b3dadeb92e61056460
Author: smut <smut@google.com>
Date: Wed Nov 01 20:47:09 2017

Comment 6 by s...@google.com, Nov 2 2017

I think we managed to get Puppet working, but it looks like you only have 8 IP addresses:
https://pantheon.corp.google.com/iam-admin/quotas?project=dart-ci&service=compute.googleapis.com&metric=In-use%20IP%20addresses&usage=USED

Please request at least 60 IP addresses; each VM will need its own public IP. I'll also need to know the full list of IP addresses so we can whitelist them for access to our Puppet master.

Comment 7 by s...@google.com, Nov 2 2017

Er, actually, all of your quotas are too low. For example, you only have 24 CPUs in us-central1, which is only enough for three 8-core VMs. For 40 Linux and 20 Windows bots you will need:
1. >= 60 IP addresses
2. >= 480 CPUs
3. >= 18,000 GB persistent disk

All three need to be in us-central1 specifically.
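For anyone checking the current limits before filing the quota request, one way to inspect the regional and global quotas is with gcloud (assuming the SDK is authenticated against dart-ci):

$ gcloud compute regions describe us-central1 --project=dart-ci --format="yaml(quotas)"
$ gcloud compute project-info describe --project=dart-ci --format="yaml(quotas)"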

Comment 8 by whesse@google.com, Nov 6 2017

I have made a request for twice that amount. I requested static IP addresses, since I did not know whether they are needed or not. When I hear back about the request, I'll follow up.

Google Compute Engine API

A request (ID: 500f200001AdraKAAR) has been made for the following quotas:
CPUs - us-central1
CPUs (all regions)
Static IP addresses - us-central1
In-use IP addresses - us-central1
Static IP addresses global
In-use IP addresses global
Persistent Disk Standard (GB) - us-central1

Comment 9 by whesse@google.com, Nov 6 2017

Quota is now approved:
Hello,

Your quota request for project '410721018617' has been approved and your quota has been adjusted accordingly.

The following quotas were increased:
+---------------------+-----------+----------------+-------------+------------------+------------------+
| Region: us-central1 |    CPUS   | DISKS_TOTAL_GB |  INSTANCES  | IN_USE_ADDRESSES | STATIC_ADDRESSES |
+---------------------+-----------+----------------+-------------+------------------+------------------+
|       Changes       | 24 -> 960 | 4096 -> 36000  | 240 -> 9600 |     8 -> 120     |     8 -> 120     |
+---------------------+-----------+----------------+-------------+------------------+------------------+

+------------------+------------------+------------------+------------------+
| GLOBAL Attribute | CPUS_ALL_REGIONS | IN_USE_ADDRESSES | STATIC_ADDRESSES |
+------------------+------------------+------------------+------------------+
|     Changes      |    64 -> 960     |     8 -> 120     |     8 -> 120     |
+------------------+------------------+------------------+------------------+


Please visit https://console.cloud.google.com/iam-admin/quotas?project=410721018617&service=compute.googleapis.com to review your updated quota.

Happy Computing!

Comment 10 by whesse@google.com, Nov 7 2017

I see that the 40 Linux machines have now been created in our GCE project. They have an empty /b directory at this point. Should I add them to the Dart.LUCI pool config, or is that done automatically? Is there any step I need to take, or will they show up in the Dart.LUCI pool after chrome-infra does all its work?

If these don't get Chrome installed through the Puppet configuration that should add Chrome to all machines in the pool, then I will add it manually.
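If a manual install does turn out to be necessary on the Linux bots, one possible fallback is to pull the stable .deb directly from Google's download server; this is only a sketch, and the Puppet configuration remains the intended mechanism:

$ wget https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb
$ sudo dpkg -i google-chrome-stable_current_amd64.deb
$ sudo apt-get -f install -y   # pull in any missing dependencies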

Comment 11 by s...@google.com, Nov 8 2017

Did they tell you which 120 IP addresses you got? I tried to list them, but it's not working:
$ CLOUDSDK_CORE_PROJECT=dart-ci gcloud compute addresses list
Listed 0 items.

I need to know the IP addresses so I can whitelist the VMs for access to the Puppet master, which will install the Machine Provider agent; the agent will create /b and connect to Swarming.
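Note that gcloud compute addresses list only shows reserved (static) addresses, so VMs with ephemeral external IPs won't appear there. Assuming list access to the dart-ci project, one way to read the external IP off each instance instead is:

$ CLOUDSDK_CORE_PROJECT=dart-ci gcloud compute instances list \
      --format="table(name, networkInterfaces[0].accessConfigs[0].natIP)"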

Comment 12 by whesse@google.com, Nov 9 2017

I believe that this issue is blocked on https://buganizer.corp.google.com/issues/67646308, which asks for a /22 block of IPs that will be used by Machine Provider to allocate the requested GCE instances.

I think that Machine Provider will be creating the instances, not GCE's instance groups mechanism.

Comment 13 by s...@google.com, Nov 9 2017

GCE creates the instances from the managed instance group, not MP. MP only creates the template and the managed instance group. Anyways, yeah we need to wait for that bug to provide a block of IP addresses that GCE can automatically assign to new instances.
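As a rough illustration of the two resources MP manages here (the names below are hypothetical, not the ones MP actually generates), the equivalent manual gcloud steps would look roughly like:

$ gcloud compute instance-templates create dart-luci-linux-template \
      --project=dart-ci \
      --machine-type=n1-standard-8 \
      --image-family=debian-9 \
      --image-project=debian-cloud
$ gcloud compute instance-groups managed create dart-luci-linux-group \
      --project=dart-ci \
      --zone=us-central1-a \
      --template=dart-luci-linux-template \
      --size=40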

Comment 14 by s...@google.com, Nov 9 2017

Status: ExternalDependency (was: Started)

Project Member Comment 15 by bugdroid1@chromium.org, Feb 15 2018

The following revision refers to this bug:
  https://chrome-internal.googlesource.com/infradata/config/+/ede5eb111e83efe7e515c85b0fa178b16200e4c9

commit ede5eb111e83efe7e515c85b0fa178b16200e4c9
Author: smut <smut@google.com>
Date: Thu Feb 15 01:26:23 2018

Comment 16 by s...@google.com, Feb 15 2018

Status: Fixed (was: ExternalDependency)
