New school semester, I'm gonna reload my machine for AMD #Xilinx #Vitis #Vivado. If I'm stretching out from Fedora for a bit, do I pick Ubuntu LTS or Red Hat 9.1? I'm not a fan of their latest shenanigans, but I'm just looking for a school tool now. Hoping I have good Arc 750 support. I tried Ubuntu 23, but with my custom partition config it missed the permissions on /home, so I got read-only errors, which I think I fixed. Not sold yet. Opinions?
Is #Xilinx still an awesome company to work for? Last I heard (a decade ago probably) they had basically zero attrition across the board.
I made my #Xilinx PCIe BAR space too small, in that address bits I thought would come out of the core were not coming out, and accesses were instead hitting a completely different interface.
AMD’s David McAfee outlined the future of AI accelerators in the Ryzen lineup. According to McAfee, software and user experience are key to the success of AI in general: https://www.pcworld.com/article/1815008/why-amd-thinks-ryzen-xdna-ai-is-the-future.html
Nothing like having a generic name collision. It would be one thing if the generics had inverse effects across the catalog of logic. It's something else if the generics have different types. I'm not sure which is worse.
PSA: #Xilinx sometimes uses SIM_MODE in their IP. It is not recommended to use the same generic in one's own #VHDL.
PS PSA: when defining a generic in #VHDL, it is best to use std_logic, bit, or std_logic_vector to support VHDL<->SystemVerilog interaction.
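A minimal sketch of what that looks like on my side, with made-up names (nothing here is from the actual Xilinx IP):

library ieee;
use ieee.std_logic_1164.all;

-- Generic deliberately NOT named SIM_MODE (to stay clear of the one some
-- Xilinx IP uses) and typed as std_logic so a SystemVerilog parent can
-- override it cleanly in a mixed-language build.
entity my_block is
  generic (
    G_SIM_FAST : std_logic := '0'  -- '1' would shorten long timers in simulation
  );
  port (
    clk  : in  std_logic;
    din  : in  std_logic_vector(31 downto 0);
    dout : out std_logic_vector(31 downto 0)
  );
end entity my_block;

architecture rtl of my_block is
begin
  -- Placeholder behavior: registered pass-through; the generic would gate
  -- whatever long-running counters the real block has.
  process (clk)
  begin
    if rising_edge(clk) then
      dout <= din;
    end if;
  end process;
end architecture rtl;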
Mapping #Xilinx stacked AXI4 busses onto indexed records is making me go cross-eyed.
I'm instantiating the AXI4 Crossbar IP, but instead of individual buses for each initiator, it stacks all the bits for every initiator into a single-named port. Ex:
wdata(255 downto 0) -- Initiator 0
wdata(511 downto 256) -- Initiator 1
and ALL of the AXI4 signals are stacked like this.
So I'm writing a wrapper to put them into their own unique records.
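The wrapper ends up shaped roughly like this; the record fields, widths, port names, and two-initiator default are illustrative assumptions, not the real crossbar port list:

library ieee;
use ieee.std_logic_1164.all;

-- Illustrative slice of the AXI4 write-data channel as a record.
package axi4_stack_pkg is
  type axi4_w_t is record
    wdata  : std_logic_vector(255 downto 0);
    wstrb  : std_logic_vector(31 downto 0);
    wvalid : std_logic;
  end record;
  type axi4_w_array_t is array (natural range <>) of axi4_w_t;
end package axi4_stack_pkg;

library ieee;
use ieee.std_logic_1164.all;
use work.axi4_stack_pkg.all;

entity axi4_unstack is
  generic (
    G_NUM_INIT : positive := 2  -- number of initiators the crossbar stacks
  );
  port (
    -- Stacked ports as the crossbar presents them (one wide vector each).
    s_axi_wdata  : in  std_logic_vector(G_NUM_INIT*256-1 downto 0);
    s_axi_wstrb  : in  std_logic_vector(G_NUM_INIT*32-1  downto 0);
    s_axi_wvalid : in  std_logic_vector(G_NUM_INIT-1     downto 0);
    -- One record per initiator.
    w_bus        : out axi4_w_array_t(0 to G_NUM_INIT-1)
  );
end entity axi4_unstack;

architecture rtl of axi4_unstack is
begin
  -- Unstack the concatenated ports into per-initiator records.
  gen_unstack : for i in 0 to G_NUM_INIT-1 generate
    w_bus(i).wdata  <= s_axi_wdata((i+1)*256-1 downto i*256);
    w_bus(i).wstrb  <= s_axi_wstrb((i+1)*32-1  downto i*32);
    w_bus(i).wvalid <= s_axi_wvalid(i);
  end generate gen_unstack;
end architecture rtl;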
The #Xilinx UG576 topic on TXCTRL2 in Table 3-7 is somewhat unclear in how it relates bits 7:0 to the TXDATA bytes: the table only considers one SERDES lane. So if four lanes are used and each lane only uses 16 bits, TXDATA still spans 63:0, but the TXCTRL2 bits actually in use are 25:24, 17:16, 9:8, and 1:0 for lanes 3, 2, 1, and 0 (respectively).
I was so confused why only lane 0 was driving k-characters when I thought all should be...
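The fix looked roughly like this; it assumes 4 lanes, 16-bit data per lane, and wizard-style stacked ports, with names of my own invention:

library ieee;
use ieee.std_logic_1164.all;

entity gt_tx_ctrl is
  port (
    -- 4 lanes x 16-bit data, stacked: lane N occupies bits 16N+15 downto 16N.
    txdata_in   : in  std_logic_vector(63 downto 0);
    k_char_in   : in  std_logic_vector(7 downto 0);  -- per-byte K flags, 2 per lane
    -- Stacked outputs toward the transceiver: 8 TXCTRL2 bits reserved per lane.
    txdata_out  : out std_logic_vector(63 downto 0);
    txctrl2_out : out std_logic_vector(31 downto 0)
  );
end entity gt_tx_ctrl;

architecture rtl of gt_tx_ctrl is
begin
  txdata_out <= txdata_in;

  -- With 16-bit data only bits 1:0 of each lane's TXCTRL2 byte matter,
  -- i.e. stacked bits 25:24, 17:16, 9:8 and 1:0 for lanes 3..0.
  gen_lanes : for lane in 0 to 3 generate
    txctrl2_out(lane*8+7 downto lane*8+2) <= (others => '0');  -- unused upper bits
    txctrl2_out(lane*8+1 downto lane*8)   <= k_char_in(lane*2+1 downto lane*2);
  end generate gen_lanes;
end architecture rtl;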
If you've gotten the zcu106 to work with an ADRV9002 the same way it does with a zcu102, let me know!
Trying to get this on the air in @OpenResearchIns Remote Labs for an #opensource #OFDM #transceiver project.
#opensource #ofdm #transceiver #xilinx #fpga
Ooo a JTAG debugger that's not $800
"Debugs all ARM microcontrollers with JTAG interface supported by OpenOCD"
So there's a catch: it has to be ARM and it has to be supported by OpenOCD (whatever that is), so it won't work for every board out there, but I'm definitely grabbing one.
https://www.digikey.com/en/products/detail/olimex-ltd/ARM-USB-TINY-H/3471388 #FPGA #xilinx
I get confused every time I set up a new GT tile in Vivado. I think it might be because the tile Y index number in the schematic is reversed relative to the IP configuration window. I wonder if that's because the die is a flip chip?
GTH224 is on the bottom (X0Y0) in the IP graphic on the left.
I contacted #Xilinx support about simulation struggles with the DMA Subsystem Bridge for PCIe and their response is "why are you trying to simulate the core, why don't you validate it with the drivers in hardware."
I mean, I understand that to some extent, which is the path I am taking now. But also, don't come at me when the problem is the core doesn't act nice in a sim environment. This is how #FPGA development is done.
Solarflare card loses carrier seconds after reboot #networking #2004 #systemdnetworkd #xilinx
Schedule has demanded I shelve the full simulation of the PCIe network and DMA accesses for now.
Mentally shelving something I feel so close to getting to work is hard, but I have to get the design done first.
Context: I’ve been working on a design which utilizes the DMA Subsystem Bridge for PCIe from #Xilinx. It works in the lab, but I cannot get it to work in simulation. #FPGA
It seems as if the DMA Subsystem Bridge for PCIe #Xilinx IP Core doesn’t have a runnable simulation for the example design that comes with the core. It’s possible I am not generating it correctly, but when I run the script:
IPNAME_ex/IPNAME_ex.ip_user_files/sim_scripts/IPNAME/questa/IPNAME.sh
It only “simulates” the IP Core itself and not the example design testbench located in:
IPNAME_ex/imports/
The #Xilinx DMA Subsystem Bridge for PCIe core will halve the AXI4 clock frequency if the maximum link speed is reduced from 8 GT/s to 5 GT/s. That likely indicates the 250 MHz AXI4 clock used for 8 GT/s is higher than it strictly needs to be, but the MMCM cannot math a closer clock frequency.
The alternative is that the 5 GT/s AXI4 clock is lower than it should be, which wouldn’t make any sense. #FPGA
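Back-of-the-envelope check, assuming an x8 link and a 256-bit AXI4 interface (my guess at the configuration, not confirmed for this core):

8 GT/s, x8, 128b/130b: $8 \times 8 \times \tfrac{128}{130} \approx 63~\text{Gb/s} \approx 7.9~\text{GB/s}$ vs. AXI4 at $256~\text{bit} \times 250~\text{MHz} = 8~\text{GB/s}$
5 GT/s, x8, 8b/10b: $5 \times 8 \times \tfrac{8}{10} = 32~\text{Gb/s} = 4~\text{GB/s}$ vs. AXI4 at $256~\text{bit} \times 125~\text{MHz} = 4~\text{GB/s}$

So under those assumptions 250 MHz gives the AXI side slightly more bandwidth than the 8 GT/s link can actually carry, while 125 MHz matches 5 GT/s exactly.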