How to join two tables in PySpark with two conditions in an optimal way

I have the following two tables in PySpark:

Table A – dfA

| ip_4        | ip        |
|-------------|-----------|
| 10.10.10.25 | 168430105 |
| 10.11.25.60 | 168499516 |

And table B – dfB

| net_cidr      | net_ip_first_4 | net_ip_last_4 | net_ip_first | net_ip_last |
|---------------|----------------|---------------|--------------|-------------|
| 10.10.10.0/24 | 10.10.10.0     | 10.10.10.255  | 168430080    | 168430335   |

…


Ansible gather facts failing at the findmnt command for some hosts

ANSIBLE VERSION

```
ansible 2.4.6.0
  config file = /home/xxxxxx/ansible.cfg
  configured module search path = [u'/home/xxxxxx/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python2.7/site-packages/ansible
  executable location = /usr/bin/ansible
  python version = 2.7.5 (default, Aug 7 2019, 00:51:29) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]
```

CONFIGURATION

```
cat ~/.ansible.cfg
[defaults]
host_key_checking = False
forks = 5
log_path = /home/userid/ansible.log
[ssh_connection]
pipelining…
```
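The truncated report above shows fact gathering failing at `findmnt`. Mount facts are collected under Ansible's `hardware` fact subset, so a common way to isolate or sidestep such a failure is the setup module's `gather_subset` and `filter` parameters (both are standard Ansible options; whether excluding `hardware` resolves this specific hang is an assumption, and `failing-hosts` is a hypothetical inventory group):

```shell
# Run fact gathering manually against the affected hosts to confirm
# that the mount facts (which use findmnt) are where it fails.
ansible failing-hosts -m setup -a 'filter=ansible_mounts'

# Workaround sketch: skip the hardware subset, which includes mount facts,
# so the rest of fact gathering can proceed.
ansible failing-hosts -m setup -a 'gather_subset=!hardware'
```

In a playbook, the equivalent is setting `gather_subset: "!hardware"` on the play.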
