How to join two tables in PySpark with two conditions in an optimal way

I have the following two tables in PySpark:

Table A – dfA

| ip_4        | ip        |
|-------------|-----------|
| 10.10.10.25 | 168430105 |
| 10.11.25.60 | 168499516 |

And table B – dfB

| net_cidr      | net_ip_first_4 | net_ip_last_4 | net_ip_first | net_ip_last |
|---------------|----------------|---------------|--------------|-------------|
| 10.10.10.0/24 | 10.10.10.0     | 10.10.10.255  | 168430080    | 168430335   |

…

Details