My desire is conceptually simple: I have a file (really a PCIe resource file from /sys/bus/pci/devices/..., but that isn't too relevant) on the host that I want to make available somewhere in guest memory, so that changes from either side get reflected to the other. Since my goal was to map only a limited segment of the host's PCIe address space, I couldn't productively back the entire guest RAM with it. The base command that I am trying to get working is listed below. The goal is to get the memory with id "bar0.ram" mapped somewhere in guest memory.

qemu-system-ppc -M ppce500 -cpu e500 -m 64M  -d guest_errors,unimp -bios $PWD/test.elf  -s -object memory-backend-file,size=1m,id=bar0.ram,mem-path=/sys/bus/pci/devices/0000\:04\:00.0/resource0,share=on  -monitor telnet:127.0.0.1:4999,server,nowait -nographic

Perhaps this would be easier on ARM or x86, but PPC doesn't offer persistent memory, NVRAM, multiple memory slots backed by different files, or similar tricks (at least none that I could figure out how to get working). It does offer ivshmem, but I was unable to figure out how to get that transparently mapped into the guest address space.
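
For context, the ivshmem route would be wired up roughly like the command below (a hypothetical invocation using QEMU's standard ivshmem-plain device and its memdev option, not one of my actual attempts). The catch is that the shared memory only shows up as BAR 2 of a PCI device, which a bare-metal guest would still have to enumerate and map itself, so it is not transparent.

qemu-system-ppc -M ppce500 -cpu e500 -m 64M -bios $PWD/test.elf -object memory-backend-file,size=1m,id=bar0.ram,mem-path=/sys/bus/pci/devices/0000\:04\:00.0/resource0,share=on -device ivshmem-plain,memdev=bar0.ram -nographic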

1 Answer

I have to hope this was absolutely the wrong way to solve the problem that I had, but it worked: I hacked QEMU in stupid ways.

In hw/ppc/e500.c, I created a global variable; when that variable is set, I add another subregion (whatever the global points at) right after the machine's RAM is registered. Yes, the address where I stuck the memory is hardcoded.

@@ -893,6 +893,8 @@ static void ppce500_power_off(void *opaque, int line, int on)
   }
 }

+MemoryRegion *magicbar0 = NULL;
+
 void ppce500_init(MachineState *machine)
 {
     MemoryRegion *address_space_mem = get_system_memory();
@@ -985,6 +987,12 @@ void ppce500_init(MachineState *machine)

     /* Register Memory */
     memory_region_add_subregion(address_space_mem, 0, machine->ram);
+    {
+        if (magicbar0)
+        {
+            memory_region_add_subregion(address_space_mem, 0x8000000, magicbar0);
+        }
+    }

     dev = qdev_new("e500-ccsr");
     object_property_add_child(OBJECT(machine), "e500-ccsr", OBJECT(dev));

Also, in softmmu/memory.c, I looked for my magic region name and stored it into the global variable. Ugly, yes. I pounded my head against the QOM for far too long before I just gave up and used the global variable.

@@ -1618,6 +1618,9 @@ void memory_region_init_ram_from_file(MemoryRegion *mr,
         object_unparent(OBJECT(mr));
         error_propagate(errp, err);
     }
+    extern MemoryRegion *magicbar0;
+    if (!strcmp(name, "bar0.ram"))
+        magicbar0 = mr;
 }

 void memory_region_init_ram_from_fd(MemoryRegion *mr,
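
In hindsight, a less invasive way to get the backend's region into the board code might be to resolve the -object by its id and ask the backend for its MemoryRegion, rather than patching memory.c. A rough, untested sketch (find_backend_region is a made-up helper name; host_memory_backend_get_memory() and object_resolve_path_type() are existing QEMU APIs, but this sketch has not been tried):

/* Untested sketch: look up the -object memory backend by id and fetch its
 * MemoryRegion, instead of hooking memory_region_init_ram_from_file(). */
#include "sysemu/hostmem.h"

static MemoryRegion *find_backend_region(const char *id)
{
    Object *obj = object_resolve_path_type(id, TYPE_MEMORY_BACKEND, NULL);

    return obj ? host_memory_backend_get_memory(MEMORY_BACKEND(obj)) : NULL;
}

/* ...and then inside ppce500_init(), in place of the magicbar0 global: */
MemoryRegion *bar0 = find_backend_region("bar0.ram");
if (bar0) {
    memory_region_add_subregion(address_space_mem, 0x8000000, bar0);
}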

Things that I did which possibly were unneeded:

  • I had the host RAM cover the entire guest memory range, including the BAR I was trying to map, so that the MMU and friends would play nice. Fortunately, the second mapping was able to override the first mapping for just that segment (QEMU's explicit API for this kind of overlap is sketched just after this list).
  • I doubled the amount of RAM I allocated; at one point the guest was only using half of it.
  • I had the guest disable the data cache (or at least I tried).
  • This was a bare-metal program, so I created a TLB entry covering all of RAM, to let me access address space above the default mapping, which only covered part of RAM.
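
On the overlap point in the first bullet: QEMU does have an explicit API for stacking one region on top of another with a priority, so the hack in e500.c could presumably have used that instead of relying on the plain call winning. Untested sketch:

/* Untested alternative to the plain memory_region_add_subregion() call in the
 * e500.c hack: make the overlap explicit and give the BAR region a higher
 * priority than machine->ram (which sits at priority 0). */
if (magicbar0) {
    memory_region_add_subregion_overlap(address_space_mem, 0x8000000,
                                        magicbar0, 1);
}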

The command I ran with this included:

qemu-system-ppc -M ppce500,memory-backend=foo.ram -cpu e500 -m 256M,slots=2,maxmem=1g  -d guest_errors,unimp -bios $PWD/test.elf  -s -object memory-backend-file,size=256m,id=foo.ram,mem-path=$PWD/realmemory,share=on,prealloc=on -object memory-backend-file,size=1m,id=bar0.ram,mem-path=/sys/bus/pci/devices/0000\:04\:00.0/resource0,share=on  -monitor telnet:127.0.0.1:4999,server,nowait -nographic -S

After I did all of that, running info mtree with the above command showed the desired result:

0000000000000000-000000000fffffff (prio 0, ram): foo.ram
0000000008000000-00000000080fffff (prio 0, ram): bar0.ram

Even better, mucking with memory at address 0x800_0000 and friends successfully read and wrote the PCIe card on the host, without the guest having a PCIe driver for that card (or an operating system at all).
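
"Mucking with memory" here just means ordinary volatile loads and stores from the bare-metal program, roughly the shape below (the helper names and offsets are invented for illustration; the real layout is whatever the card defines):

/* Illustrative only: access the mapped BAR at guest physical 0x0800_0000.
 * Offsets and helper names are made up; the card defines the real layout. */
#include <stdint.h>

#define BAR0_BASE 0x08000000UL

static inline uint32_t bar0_read32(unsigned long off)
{
    return *(volatile uint32_t *)(BAR0_BASE + off);
}

static inline void bar0_write32(unsigned long off, uint32_t val)
{
    *(volatile uint32_t *)(BAR0_BASE + off) = val;
}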

Just to document this (for myself I guess), I used the following assembler in the guest to disable the data cache:

/* HID0 value with no HID0_DCE bit, so writing it leaves the data cache
 * disabled (instruction cache, address broadcast, machine check only). */
#define CONFIG_SYS_HID0_FINAL (HID0_ICE | HID0_ABE | HID0_EMCP)
    lis     r3, CONFIG_SYS_HID0_FINAL@h      /* upper 16 bits of the value */
    ori     r3, r3, CONFIG_SYS_HID0_FINAL@l  /* lower 16 bits */
    SYNC                                     /* barrier before touching HID0 */
    mtspr   SPRN_HID0, r3                    /* write the new HID0 value */

And I used the following C in the guest to add a TLB entry (yes, with a hardcoded RAM size):

/* Arguments follow U-Boot's set_tlb(tlb, epn, rpn, perms, wimge, ts, esel,
 * tsize, iprot). */
set_tlb(1,                    /* TLB1, the variable-page-size array */
          0x0000000,          /* effective (virtual) base address */
          0x0000000,          /* real (physical) base address */
          MAS3_UR | MAS3_UW | MAS3_UX | MAS3_SR | MAS3_SW | MAS3_SX, /* full user/supervisor R/W/X */
          0,                  /* WIMGE: default cache attributes */
          0,                  /* TS: translation space 0 */
          2,                  /* ESEL: entry 2 in TLB1 */
          BOOKE_PAGESZ_256M,  /* page size: 256 MB (the hardcoded RAM size) */
          0);                 /* IPROT: not protected from invalidation */

For anyone who has gotten this far, I guess we can all hope that someone will come along and provide an answer that will work with QEMU's APIs, instead of what I ended up doing.
