<html><head>
<meta content="text/html; charset=UTF-8" http-equiv="Content-Type">
</head><body bgcolor="#FFFFFF" text="#000000"><br>
This patch is a monster, but that's to be expected with MST, I suppose. 
:) It has some formatting issues (lines over 80 characters) that can be 
cleaned up later, as far as I'm concerned. Otherwise I don't see 
anything glaring here, so...<br>
<br>
<span>Reviewed-by: Todd Previte <a class="moz-txt-link-rfc2396E" href="mailto:tprevite@gmail.com"><tprevite@gmail.com></a><br> 
</span><br>
<blockquote style="border: 0px none;" 
cite="mid:1400640904-16847-8-git-send-email-airlied@gmail.com" 
type="cite">
  <div style="margin:30px 25px 10px 25px;" class="__pbConvHr"><div><a 
moz-do-not-send="true" href="mailto:airlied@gmail.com">Dave 
Airlie</a>, Tuesday, May 20, 2014 7:55 PM</div></div>
  <div style="color:#888888;margin-left:24px;margin-right:24px;" 
__pbrmquotes="true" class="__pbConvBody"><div>From: Dave Airlie 
<a class="moz-txt-link-rfc2396E" href="mailto:airlied@redhat.com"><airlied@redhat.com></a><br><br>This is the initial import of the 
helper for displayport multistream.<br><br>It consists of a topology 
manager, init/destroy/set mst state<br><br>It supports DP 1.2 MST 
sideband msg protocol handler - via hpd irqs<br><br>connector detect and
 edid retrieval interface.<br><br>It supports i2c device over DP 1.2 
sideband msg protocol (EDID reads only)<br><br>bandwidth manager API via
 vcpi allocation and payload updating,<br>along with a helper to check 
the ACT status.<br><br>Objects:<br>MST topology manager - one per 
toplevel MST capable GPU port - not sure if this should be higher level 
again<br>MST branch unit - one instance per plugged branching unit - one
 at top of hierarchy - others hanging from ports<br>MST port - one port 
per port reported by branching units, can have MST units hanging from 
them as well.<br><br>Changes since initial posting:<br>a) add a mutex 
responsible for the queues; it locks the sideband and msg slots, and 
msgs to transmit state<br>b) add worker to handle connection state 
change events, for MST device chaining and hotplug<br>c) add a payload 
spinlock<br>d) add path sideband msg support<br>e) fixup enum path 
resources transmit<br>f) reduce max dpcd msg to 16, as per DP1.2 spec.<br>g)
 separate tx queue kicking from irq processing and move irq acking back 
to drivers.<br><br>Changes since v0.2:<br>a) reorganise code,<br>b) drop
 ACT forcing code<br>c) add connector naming interface using path 
property<br>d) add topology dumper helper<br>e) proper reference 
counting and lookup for ports and mstbs.<br>f) move tx kicking into a 
workq<br>g) add aux locking - this should be redone<br>h) split teardown
 into two parts<br>i) start working on documentation on interface.<br><br>Changes
 since v0.3:<br>a) vc payload locking and tracking fixes<br>b) add 
hotplug callback into driver - replaces crazy return 1 scheme<br>c) 
txmsg + mst branch device refcount fixes<br>d) don't bail on mst 
shutdown if device is gone<br>e) change irq handler to take all 4 bytes 
of SINK_COUNT + ESI vectors<br>f) make DP payload updates timeout longer
 - observed on docking station redock<br>g) add more info to debugfs 
dumper<br><br>Changes since v0.4:<br>a) suspend/resume support<br>b) 
more debugging in debugfs<br><br>TODO:<br>misc features<br><br>Signed-off-by:
 Dave Airlie <a class="moz-txt-link-rfc2396E" href="mailto:airlied@redhat.com"><airlied@redhat.com></a><br>---<br> 
Documentation/DocBook/drm.tmpl        |    6 +<br> 
drivers/gpu/drm/Makefile              |    2 +-<br> 
drivers/gpu/drm/drm_dp_mst_topology.c | 2739 
+++++++++++++++++++++++++++++++++<br> include/drm/drm_dp_mst_helper.h   
    |  507 ++++++<br> 4 files changed, 3253 insertions(+), 1 deletion(-)<br>
 create mode 100644 drivers/gpu/drm/drm_dp_mst_topology.c<br> create 
mode 100644 include/drm/drm_dp_mst_helper.h<br><br>diff --git 
a/Documentation/DocBook/drm.tmpl b/Documentation/DocBook/drm.tmpl<br>index
 83dd0b0..1883976 100644<br>--- a/Documentation/DocBook/drm.tmpl<br>+++ 
b/Documentation/DocBook/drm.tmpl<br>@@ -2296,6 +2296,12 @@ void 
intel_crt_init(struct drm_device *dev)<br> 
!Edrivers/gpu/drm/drm_dp_helper.c<br>     </sect2><br>     
<sect2><br>+      <title>Display Port MST Helper Functions 
Reference</title><br>+!Pdrivers/gpu/drm/drm_dp_mst_topology.c dp 
mst helper<br>+!Iinclude/drm/drm_dp_mst_helper.h<br>+!Edrivers/gpu/drm/drm_dp_mst_topology.c<br>+
    </sect2><br>+    <sect2><br>       <title>EDID 
Helper Functions Reference</title><br> 
!Edrivers/gpu/drm/drm_edid.c<br>     </sect2><br>diff --git 
a/drivers/gpu/drm/Makefile b/drivers/gpu/drm/Makefile<br>index 
48e38ba..712b73e 100644<br>--- a/drivers/gpu/drm/Makefile<br>+++ 
b/drivers/gpu/drm/Makefile<br>@@ -23,7 +23,7 @@ drm-$(CONFIG_DRM_PANEL) 
+= drm_panel.o<br> <br> drm-usb-y   := drm_usb.o<br> <br>-drm_kms_helper-y
 := drm_crtc_helper.o drm_dp_helper.o drm_probe_helper.o<br>+drm_kms_helper-y
 := drm_crtc_helper.o drm_dp_helper.o drm_probe_helper.o 
drm_dp_mst_topology.o<br> 
drm_kms_helper-$(CONFIG_DRM_LOAD_EDID_FIRMWARE) += drm_edid_load.o<br> 
drm_kms_helper-$(CONFIG_DRM_KMS_FB_HELPER) += drm_fb_helper.o<br> 
drm_kms_helper-$(CONFIG_DRM_KMS_CMA_HELPER) += drm_fb_cma_helper.o<br>diff
 --git a/drivers/gpu/drm/drm_dp_mst_topology.c 
b/drivers/gpu/drm/drm_dp_mst_topology.c<br>new file mode 100644<br>index
 0000000..ebd9292<br>--- /dev/null<br>+++ 
b/drivers/gpu/drm/drm_dp_mst_topology.c<br>@@ -0,0 +1,2739 @@<br>+/*<br>+
 * Copyright © 2014 Red Hat<br>+ *<br>+ * Permission to use, copy, 
modify, distribute, and sell this software and its<br>+ * documentation 
for any purpose is hereby granted without fee, provided that<br>+ * the 
above copyright notice appear in all copies and that both that copyright<br>+
 * notice and this permission notice appear in supporting documentation,
 and<br>+ * that the name of the copyright holders not be used in 
advertising or<br>+ * publicity pertaining to distribution of the 
software without specific,<br>+ * written prior permission.  The 
copyright holders make no representations<br>+ * about the suitability 
of this software for any purpose.  It is provided "as<br>+ * is" without
 express or implied warranty.<br>+ *<br>+ * THE COPYRIGHT HOLDERS 
DISCLAIM ALL WARRANTIES WITH REGARD TO THIS SOFTWARE,<br>+ * INCLUDING 
ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS, IN NO<br>+ * 
EVENT SHALL THE COPYRIGHT HOLDERS BE LIABLE FOR ANY SPECIAL, INDIRECT OR<br>+
 * CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS 
OF USE,<br>+ * DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, 
NEGLIGENCE OR OTHER<br>+ * TORTIOUS ACTION, ARISING OUT OF OR IN 
CONNECTION WITH THE USE OR PERFORMANCE<br>+ * OF THIS SOFTWARE.<br>+ */<br>+<br>+#include
 <linux/kernel.h><br>+#include <linux/delay.h><br>+#include 
<linux/init.h><br>+#include <linux/errno.h><br>+#include 
<linux/sched.h><br>+#include <linux/i2c.h><br>+#include 
<drm/drm_dp_mst_helper.h><br>+#include <drm/drmP.h><br>+<br>+#include
 <drm/drm_fixed.h><br>+<br>+/**<br>+ * DOC: dp mst helper<br>+ *<br>+
 * These functions contain parts of the DisplayPort 1.2a MultiStream 
Transport<br>+ * protocol. The helpers contain a topology manager and 
bandwidth manager.<br>+ * The helpers encapsulate the sending and 
receiving of sideband msgs.<br>+ */<br>+static bool 
dump_dp_payload_table(struct drm_dp_mst_topology_mgr *mgr,<br>+                             
char *buf);<br>+static int test_calc_pbn_mode(void);<br>+<br>+static 
void drm_dp_put_port(struct drm_dp_mst_port *port);<br>+<br>+static int 
drm_dp_dpcd_write_payload(struct drm_dp_mst_topology_mgr *mgr,<br>+                                
    int id,<br>+                               struct drm_dp_payload *payload);<br>+<br>+static
 int drm_dp_send_dpcd_write(struct drm_dp_mst_topology_mgr *mgr,<br>+                     
          struct drm_dp_mst_port *port,<br>+                                int offset, int size, u8 
*bytes);<br>+<br>+static int drm_dp_send_link_address(struct 
drm_dp_mst_topology_mgr *mgr,<br>+                                    struct drm_dp_mst_branch 
*mstb);<br>+static int drm_dp_send_enum_path_resources(struct 
drm_dp_mst_topology_mgr *mgr,<br>+                                           struct drm_dp_mst_branch 
*mstb,<br>+                                          struct drm_dp_mst_port *port);<br>+static bool 
drm_dp_validate_guid(struct drm_dp_mst_topology_mgr *mgr,<br>+                             u8 
*guid);<br>+<br>+static int drm_dp_mst_register_i2c_bus(struct 
drm_dp_aux *aux);<br>+static void drm_dp_mst_unregister_i2c_bus(struct 
drm_dp_aux *aux);<br>+static void drm_dp_mst_kick_tx(struct 
drm_dp_mst_topology_mgr *mgr);<br>+/* sideband msg handling */<br>+static
 u8 drm_dp_msg_header_crc4(const uint8_t *data, size_t num_nibbles)<br>+{<br>+
        u8 bitmask = 0x80;<br>+   u8 bitshift = 7;<br>+     u8 array_index = 0;<br>+  
int number_of_bits = num_nibbles * 4;<br>+        u8 remainder = 0;<br>+<br>+ 
while (number_of_bits != 0) {<br>+                number_of_bits--;<br>+            remainder 
<<= 1;<br>+         remainder |= (data[array_index] & bitmask) 
>> bitshift;<br>+           bitmask >>= 1;<br>+         bitshift--;<br>+          if
 (bitmask == 0) {<br>+                    bitmask = 0x80;<br>+                      bitshift = 7;<br>+                        
array_index++;<br>+               }<br>+            if ((remainder & 0x10) == 0x10)<br>+                  
remainder ^= 0x13;<br>+   }<br>+<br>+ number_of_bits = 4;<br>+  while 
(number_of_bits != 0) {<br>+              number_of_bits--;<br>+            remainder 
<<= 1;<br>+         if ((remainder & 0x10) != 0)<br>+                     remainder ^= 
0x13;<br>+        }<br>+<br>+ return remainder;<br>+}<br>+<br>+static u8 
drm_dp_msg_data_crc4(const uint8_t *data, u8 number_of_bytes)<br>+{<br>+
        u8 bitmask = 0x80;<br>+   u8 bitshift = 7;<br>+     u8 array_index = 0;<br>+  
int number_of_bits = number_of_bytes * 8;<br>+    u16 remainder = 0;<br>+<br>+
        while (number_of_bits != 0) {<br>+                number_of_bits--;<br>+            remainder 
<<= 1;<br>+         remainder |= (data[array_index] & bitmask) 
>> bitshift;<br>+           bitmask >>= 1;<br>+         bitshift--;<br>+          if
 (bitmask == 0) {<br>+                    bitmask = 0x80;<br>+                      bitshift = 7;<br>+                        
array_index++;<br>+               }<br>+            if ((remainder & 0x100) == 0x100)<br>+        
                remainder ^= 0xd5;<br>+   }<br>+<br>+ number_of_bits = 8;<br>+  while 
(number_of_bits != 0) {<br>+              number_of_bits--;<br>+            remainder 
<<= 1;<br>+         if ((remainder & 0x100) != 0)<br>+                  
remainder ^= 0xd5;<br>+   }<br>+<br>+ return remainder & 0xff;<br>+}<br>+static
 inline u8 drm_dp_calc_sb_hdr_size(struct drm_dp_sideband_msg_hdr *hdr)<br>+{<br>+
        u8 size = 3;<br>+ size += (hdr->lct / 2);<br>+   return size;<br>+}<br>+<br>+static
 void drm_dp_encode_sideband_msg_hdr(struct drm_dp_sideband_msg_hdr 
*hdr,<br>+                                           u8 *buf, int *len)<br>+{<br>+    int idx = 0;<br>+ int i;<br>+
        u8 crc4;<br>+     buf[idx++] = ((hdr->lct & 0xf) << 4) | 
(hdr->lcr & 0xf);<br>+     for (i = 0; i < (hdr->lct / 2); i++)<br>+
                buf[idx++] = hdr->rad[i];<br>+ buf[idx++] = (hdr->broadcast 
<< 7) | (hdr->path_msg << 6) |<br>+                (hdr->msg_len 
& 0x3f);<br>+ buf[idx++] = (hdr->somt << 7) | (hdr->eomt
 << 6) | (hdr->seqno << 4);<br>+<br>+        crc4 = 
drm_dp_msg_header_crc4(buf, (idx * 2) - 1);<br>+  buf[idx - 1] |= (crc4 
& 0xf);<br>+<br>+       *len = idx;<br>+}<br>+<br>+static bool 
drm_dp_decode_sideband_msg_hdr(struct drm_dp_sideband_msg_hdr *hdr,<br>+
                                           u8 *buf, int buflen, u8 *hdrlen)<br>+{<br>+      u8 crc4;<br>+     u8 
len;<br>+ int i;<br>+       u8 idx;<br>+      if (buf[0] == 0)<br>+             return false;<br>+
        len = 3;<br>+     len += ((buf[0] & 0xf0) >> 4) / 2;<br>+ if (len
 > buflen)<br>+                return false;<br>+        crc4 = 
drm_dp_msg_header_crc4(buf, (len * 2) - 1);<br>+<br>+       if ((crc4 & 
0xf) != (buf[len - 1] & 0xf)) {<br>+          DRM_DEBUG_KMS("crc4 mismatch 
0x%x 0x%x\n", crc4, buf[len - 1]);<br>+              return false;<br>+        }<br>+<br>+ 
hdr->lct = (buf[0] & 0xf0) >> 4;<br>+        hdr->lcr = (buf[0]
 & 0xf);<br>+ idx = 1;<br>+     for (i = 0; i < (hdr->lct / 2); 
i++)<br>+         hdr->rad[i] = buf[idx++];<br>+ hdr->broadcast = 
(buf[idx] >> 7) & 0x1;<br>+     hdr->path_msg = (buf[idx] 
>> 6) & 0x1;<br>+       hdr->msg_len = buf[idx] & 0x3f;<br>+       
idx++;<br>+       hdr->somt = (buf[idx] >> 7) & 0x1;<br>+      
hdr->eomt = (buf[idx] >> 6) & 0x1;<br>+      hdr->seqno = 
(buf[idx] >> 4) & 0x1;<br>+     idx++;<br>+       *hdrlen = idx;<br>+       
return true;<br>+}<br>+<br>+static void 
drm_dp_encode_sideband_req(struct drm_dp_sideband_msg_req_body *req,<br>+
                                       struct drm_dp_sideband_msg_tx *raw)<br>+{<br>+       int idx = 0;<br>+
        int i;<br>+       u8 *buf = raw->msg;<br>+       buf[idx++] = req->req_type 
& 0x7f;<br>+<br>+       switch (req->req_type) {<br>+  case 
DP_ENUM_PATH_RESOURCES:<br>+              buf[idx] = (req->u.port_num.port_number
 & 0xf) << 4;<br>+              idx++;<br>+               break;<br>+       case 
DP_ALLOCATE_PAYLOAD:<br>+         buf[idx] = 
(req->u.allocate_payload.port_number & 0xf) << 4 |<br>+                      
(req->u.allocate_payload.number_sdp_streams & 0xf);<br>+           idx++;<br>+
                buf[idx] = (req->u.allocate_payload.vcpi & 0x7f);<br>+             idx++;<br>+
                buf[idx] = (req->u.allocate_payload.pbn >> 8);<br>+              idx++;<br>+
                buf[idx] = (req->u.allocate_payload.pbn & 0xff);<br>+              idx++;<br>+
                for (i = 0; i < req->u.allocate_payload.number_sdp_streams / 2; 
i++) {<br>+                       buf[idx] = ((req->u.allocate_payload.sdp_stream_sink[i *
 2] & 0xf) << 4) |<br>+                         
(req->u.allocate_payload.sdp_stream_sink[i * 2 + 1] & 0xf);<br>+   
                idx++;<br>+               }<br>+            if (req->u.allocate_payload.number_sdp_streams
 & 1) {<br>+                  i = req->u.allocate_payload.number_sdp_streams - 
1;<br>+                   buf[idx] = (req->u.allocate_payload.sdp_stream_sink[i] 
& 0xf) << 4;<br>+                       idx++;<br>+               }<br>+            break;<br>+       case 
DP_QUERY_PAYLOAD:<br>+            buf[idx] = (req->u.query_payload.port_number 
& 0xf) << 4;<br>+               idx++;<br>+               buf[idx] = 
(req->u.query_payload.vcpi & 0x7f);<br>+           idx++;<br>+               break;<br>+
        case DP_REMOTE_DPCD_READ:<br>+            buf[idx] = 
(req->u.dpcd_read.port_number & 0xf) << 4;<br>+              buf[idx] 
|= ((req->u.dpcd_read.dpcd_address & 0xf0000) >> 16) & 
0xf;<br>+         idx++;<br>+               buf[idx] = (req->u.dpcd_read.dpcd_address 
& 0xff00) >> 8;<br>+            idx++;<br>+               buf[idx] = 
(req->u.dpcd_read.dpcd_address & 0xff);<br>+               idx++;<br>+               
buf[idx] = (req->u.dpcd_read.num_bytes);<br>+          idx++;<br>+               break;<br>+<br>+
        case DP_REMOTE_DPCD_WRITE:<br>+           buf[idx] = 
(req->u.dpcd_write.port_number & 0xf) << 4;<br>+             buf[idx] 
|= ((req->u.dpcd_write.dpcd_address & 0xf0000) >> 16) &
 0xf;<br>+                idx++;<br>+               buf[idx] = (req->u.dpcd_write.dpcd_address 
& 0xff00) >> 8;<br>+            idx++;<br>+               buf[idx] = 
(req->u.dpcd_write.dpcd_address & 0xff);<br>+              idx++;<br>+               
buf[idx] = (req->u.dpcd_write.num_bytes);<br>+         idx++;<br>+               
memcpy(&buf[idx], req->u.dpcd_write.bytes, 
req->u.dpcd_write.num_bytes);<br>+             idx += 
req->u.dpcd_write.num_bytes;<br>+              break;<br>+       case 
DP_REMOTE_I2C_READ:<br>+          buf[idx] = (req->u.i2c_read.port_number 
& 0xf) << 4;<br>+               buf[idx] |= 
(req->u.i2c_read.num_transactions & 0x3);<br>+             idx++;<br>+               for 
(i = 0; i < (req->u.i2c_read.num_transactions & 0x3); i++) {<br>+
                        buf[idx] = req->u.i2c_read.transactions[i].i2c_dev_id & 0x7f;<br>+
                        idx++;<br>+                       buf[idx] = 
req->u.i2c_read.transactions[i].num_bytes;<br>+                        idx++;<br>+                       
memcpy(&buf[idx], req->u.i2c_read.transactions[i].bytes, 
req->u.i2c_read.transactions[i].num_bytes);<br>+                       idx += 
req->u.i2c_read.transactions[i].num_bytes;<br>+<br>+                     buf[idx] = 
(req->u.i2c_read.transactions[i].no_stop_bit & 0x1) << 5;<br>+
                        buf[idx] |= (req->u.i2c_read.transactions[i].i2c_transaction_delay
 & 0xf);<br>+                 idx++;<br>+               }<br>+            buf[idx] = 
(req->u.i2c_read.read_i2c_device_id) & 0x7f;<br>+          idx++;<br>+               
buf[idx] = (req->u.i2c_read.num_bytes_read);<br>+              idx++;<br>+               
break;<br>+<br>+    case DP_REMOTE_I2C_WRITE:<br>+            buf[idx] = 
(req->u.i2c_write.port_number & 0xf) << 4;<br>+              idx++;<br>+
                buf[idx] = (req->u.i2c_write.write_i2c_device_id) & 0x7f;<br>+     
        idx++;<br>+               buf[idx] = (req->u.i2c_write.num_bytes);<br>+          idx++;<br>+
                memcpy(&buf[idx], req->u.i2c_write.bytes, 
req->u.i2c_write.num_bytes);<br>+              idx += 
req->u.i2c_write.num_bytes;<br>+               break;<br>+       }<br>+    raw->cur_len =
 idx;<br>+}<br>+<br>+static void drm_dp_crc_sideband_chunk_req(u8 *msg, 
u8 len)<br>+{<br>+  u8 crc4;<br>+     crc4 = drm_dp_msg_data_crc4(msg, len);<br>+
        msg[len] = crc4;<br>+}<br>+<br>+static void 
drm_dp_encode_sideband_reply(struct drm_dp_sideband_msg_reply_body *rep,<br>+
                                         struct drm_dp_sideband_msg_tx *raw)<br>+{<br>+     int idx = 0;<br>+ 
u8 *buf = raw->msg;<br>+<br>+    buf[idx++] = (rep->reply_type & 
0x1) << 7 | (rep->req_type & 0x7f);<br>+<br>+  
raw->cur_len = idx;<br>+}<br>+<br>+/* this adds a chunk of msg to the
 builder to get the final msg */<br>+static bool 
drm_dp_sideband_msg_build(struct drm_dp_sideband_msg_rx *msg,<br>+                                  
    u8 *replybuf, u8 replybuflen, bool hdr)<br>+{<br>+      int ret;<br>+     u8 
crc4;<br>+<br>+     if (hdr) {<br>+           u8 hdrlen;<br>+           struct 
drm_dp_sideband_msg_hdr recv_hdr;<br>+            ret = 
drm_dp_decode_sideband_msg_hdr(&recv_hdr, replybuf, replybuflen, 
&hdrlen);<br>+                if (ret == false) {<br>+                  
print_hex_dump(KERN_DEBUG, "failed hdr", DUMP_PREFIX_NONE, 16, 1, 
replybuf, replybuflen, false);<br>+                       return false;<br>+                }<br>+<br>+         
/* get length contained in this portion */<br>+           msg->curchunk_len = 
recv_hdr.msg_len;<br>+            msg->curchunk_hdrlen = hdrlen;<br>+<br>+         /* 
we have already gotten an somt - don't bother parsing */<br>+             if 
(recv_hdr.somt && msg->have_somt)<br>+                 return false;<br>+<br>+
                if (recv_hdr.somt) {<br>+                 memcpy(&msg->initial_hdr, 
&recv_hdr, sizeof(struct drm_dp_sideband_msg_hdr));<br>+                      
msg->have_somt = true;<br>+            }<br>+            if (recv_hdr.eomt)<br>+                   
msg->have_eomt = true;<br>+<br>+         /* copy the bytes for the remainder
 of this header chunk */<br>+             msg->curchunk_idx = 
min(msg->curchunk_len, (u8)(replybuflen - hdrlen));<br>+               
memcpy(&msg->chunk[0], replybuf + hdrlen, msg->curchunk_idx);<br>+
        } else {<br>+             memcpy(&msg->chunk[msg->curchunk_idx], 
replybuf, replybuflen);<br>+              msg->curchunk_idx += replybuflen;<br>+ }<br>+<br>+
        if (msg->curchunk_idx >= msg->curchunk_len) {<br>+               /* do CRC 
*/<br>+           crc4 = drm_dp_msg_data_crc4(msg->chunk, msg->curchunk_len
 - 1);<br>+               /* copy chunk into bigger msg */<br>+             
memcpy(&msg->msg[msg->curlen], msg->chunk, 
msg->curchunk_len - 1);<br>+           msg->curlen += msg->curchunk_len -
 1;<br>+  }<br>+    return true;<br>+}<br>+<br>+static bool 
drm_dp_sideband_parse_link_address(struct drm_dp_sideband_msg_rx *raw,<br>+
                                               struct drm_dp_sideband_msg_reply_body *repmsg)<br>+{<br>+    
int idx = 1;<br>+ int i;<br>+       memcpy(repmsg->u.link_addr.guid, 
&raw->msg[idx], 16);<br>+  idx += 16;<br>+   
repmsg->u.link_addr.nports = raw->msg[idx] & 0xf;<br>+  idx++;<br>+
        if (idx > raw->curlen)<br>+         goto fail_len;<br>+       for (i = 0; i 
< repmsg->u.link_addr.nports; i++) {<br>+           if (raw->msg[idx] 
& 0x80)<br>+                  repmsg->u.link_addr.ports[i].input_port = 1;<br>+<br>+
                repmsg->u.link_addr.ports[i].peer_device_type = (raw->msg[idx] 
>> 4) & 0x7;<br>+               repmsg->u.link_addr.ports[i].port_number
 = (raw->msg[idx] & 0xf);<br>+<br>+          idx++;<br>+               if (idx > 
raw->curlen)<br>+                      goto fail_len;<br>+               
repmsg->u.link_addr.ports[i].mcs = (raw->msg[idx] >> 7) 
& 0x1;<br>+           repmsg->u.link_addr.ports[i].ddps = 
(raw->msg[idx] >> 6) & 0x1;<br>+             if 
(repmsg->u.link_addr.ports[i].input_port == 0)<br>+                    
repmsg->u.link_addr.ports[i].legacy_device_plug_status = 
(raw->msg[idx] >> 5) & 0x1;<br>+             idx++;<br>+               if (idx >
 raw->curlen)<br>+                     goto fail_len;<br>+               if 
(repmsg->u.link_addr.ports[i].input_port == 0) {<br>+                  
repmsg->u.link_addr.ports[i].dpcd_revision = (raw->msg[idx]);<br>+
                        idx++;<br>+                       if (idx > raw->curlen)<br>+                         goto fail_len;<br>+
                        memcpy(repmsg->u.link_addr.ports[i].peer_guid, 
&raw->msg[idx], 16);<br>+                  idx += 16;<br>+                   if (idx > 
raw->curlen)<br>+                              goto fail_len;<br>+                       
repmsg->u.link_addr.ports[i].num_sdp_streams = (raw->msg[idx] 
>> 4) & 0xf;<br>+                       
repmsg->u.link_addr.ports[i].num_sdp_stream_sinks = (raw->msg[idx]
 & 0xf);<br>+                 idx++;<br>+<br>+            }<br>+            if (idx > 
raw->curlen)<br>+                      goto fail_len;<br>+       }<br>+<br>+ return true;<br>+fail_len:<br>+
        DRM_DEBUG_KMS("link address reply parse length fail %d %d\n", idx, 
raw->curlen);<br>+     return false;<br>+}<br>+<br>+static bool 
drm_dp_sideband_parse_remote_dpcd_read(struct drm_dp_sideband_msg_rx 
*raw,<br>+                                                   struct drm_dp_sideband_msg_reply_body *repmsg)<br>+{<br>+
        int idx = 1;<br>+ repmsg->u.remote_dpcd_read_ack.port_number = 
raw->msg[idx] & 0xf;<br>+  idx++;<br>+       if (idx > 
raw->curlen)<br>+              goto fail_len;<br>+       
repmsg->u.remote_dpcd_read_ack.num_bytes = raw->msg[idx];<br>+      if 
(idx > raw->curlen)<br>+            goto fail_len;<br>+<br>+    
memcpy(repmsg->u.remote_dpcd_read_ack.bytes, &raw->msg[idx], 
repmsg->u.remote_dpcd_read_ack.num_bytes);<br>+        return true;<br>+fail_len:<br>+
        DRM_DEBUG_KMS("link address reply parse length fail %d %d\n", idx, 
raw->curlen);<br>+     return false;<br>+}<br>+<br>+static bool 
drm_dp_sideband_parse_remote_dpcd_write(struct drm_dp_sideband_msg_rx 
*raw,<br>+                                                      struct drm_dp_sideband_msg_reply_body *repmsg)<br>+{<br>+
        int idx = 1;<br>+ repmsg->u.remote_dpcd_write_ack.port_number = 
raw->msg[idx] & 0xf;<br>+  idx++;<br>+       if (idx > 
raw->curlen)<br>+              goto fail_len;<br>+       return true;<br>+fail_len:<br>+
        DRM_DEBUG_KMS("parse length fail %d %d\n", idx, raw->curlen);<br>+   
return false;<br>+}<br>+<br>+static bool 
drm_dp_sideband_parse_remote_i2c_read_ack(struct drm_dp_sideband_msg_rx 
*raw,<br>+                                                      struct drm_dp_sideband_msg_reply_body *repmsg)<br>+{<br>+
        int idx = 1;<br>+<br>+      repmsg->u.remote_i2c_read_ack.port_number = 
(raw->msg[idx] & 0xf);<br>+        idx++;<br>+       if (idx > 
raw->curlen)<br>+              goto fail_len;<br>+       
repmsg->u.remote_i2c_read_ack.num_bytes = raw->msg[idx];<br>+       
idx++;<br>+       /* TODO check */<br>+     
memcpy(repmsg->u.remote_i2c_read_ack.bytes, &raw->msg[idx], 
repmsg->u.remote_i2c_read_ack.num_bytes);<br>+ return true;<br>+fail_len:<br>+
        DRM_DEBUG_KMS("remote i2c reply parse length fail %d %d\n", idx, 
raw->curlen);<br>+     return false;<br>+}<br>+<br>+static bool 
drm_dp_sideband_parse_enum_path_resources_ack(struct 
drm_dp_sideband_msg_rx *raw,<br>+                                                   struct 
drm_dp_sideband_msg_reply_body *repmsg)<br>+{<br>+  int idx = 1;<br>+ 
repmsg->u.path_resources.port_number = (raw->msg[idx] >> 4) 
& 0xf;<br>+   idx++;<br>+       if (idx > raw->curlen)<br>+         goto 
fail_len;<br>+    repmsg->u.path_resources.full_payload_bw_number = 
(raw->msg[idx] << 8) | (raw->msg[idx+1]);<br>+        idx += 2;<br>+
        if (idx > raw->curlen)<br>+         goto fail_len;<br>+       
repmsg->u.path_resources.avail_payload_bw_number = (raw->msg[idx] 
<< 8) | (raw->msg[idx+1]);<br>+  idx += 2;<br>+    if (idx > 
raw->curlen)<br>+              goto fail_len;<br>+       return true;<br>+fail_len:<br>+
        DRM_DEBUG_KMS("enum resource parse length fail %d %d\n", idx, 
raw->curlen);<br>+     return false;<br>+}<br>+<br>+static bool 
drm_dp_sideband_parse_allocate_payload_ack(struct drm_dp_sideband_msg_rx
 *raw,<br>+                                                         struct drm_dp_sideband_msg_reply_body *repmsg)<br>+{<br>+
        int idx = 1;<br>+ repmsg->u.allocate_payload.port_number = 
(raw->msg[idx] >> 4) & 0xf;<br>+     idx++;<br>+       if (idx > 
raw->curlen)<br>+              goto fail_len;<br>+       
repmsg->u.allocate_payload.vcpi = raw->msg[idx];<br>+       idx++;<br>+       
if (idx > raw->curlen)<br>+         goto fail_len;<br>+       
repmsg->u.allocate_payload.allocated_pbn = (raw->msg[idx] <<
 8) | (raw->msg[idx+1]);<br>+  idx += 2;<br>+    if (idx > 
raw->curlen)<br>+              goto fail_len;<br>+       return true;<br>+fail_len:<br>+
        DRM_DEBUG_KMS("allocate payload parse length fail %d %d\n", idx, 
raw->curlen);<br>+     return false;<br>+}<br>+<br>+static bool 
drm_dp_sideband_parse_query_payload_ack(struct drm_dp_sideband_msg_rx 
*raw,<br>+                                                    struct drm_dp_sideband_msg_reply_body *repmsg)<br>+{<br>+
        int idx = 1;<br>+ repmsg->u.query_payload.port_number = 
(raw->msg[idx] >> 4) & 0xf;<br>+     idx++;<br>+       if (idx > 
raw->curlen)<br>+              goto fail_len;<br>+       
repmsg->u.query_payload.allocated_pbn = (raw->msg[idx] << 8)
 | (raw->msg[idx + 1]);<br>+   idx += 2;<br>+    if (idx > 
raw->curlen)<br>+              goto fail_len;<br>+       return true;<br>+fail_len:<br>+
        DRM_DEBUG_KMS("query payload parse length fail %d %d\n", idx, 
raw->curlen);<br>+     return false;<br>+}<br>+<br>+static bool 
drm_dp_sideband_parse_reply(struct drm_dp_sideband_msg_rx *raw,<br>+                              
        struct drm_dp_sideband_msg_reply_body *msg)<br>+{<br>+      memset(msg, 0, 
sizeof(*msg));<br>+       msg->reply_type = (raw->msg[0] & 0x80) 
>> 7;<br>+  msg->req_type = (raw->msg[0] & 0x7f);<br>+<br>+
        if (msg->reply_type) {<br>+            memcpy(msg->u.nak.guid, 
&raw->msg[1], 16);<br>+            msg->u.nak.reason = raw->msg[17];<br>+
                msg->u.nak.nak_data = raw->msg[18];<br>+            return false;<br>+        }<br>+<br>+
        switch (msg->req_type) {<br>+  case DP_LINK_ADDRESS:<br>+                return 
drm_dp_sideband_parse_link_address(raw, msg);<br>+        case 
DP_QUERY_PAYLOAD:<br>+            return 
drm_dp_sideband_parse_query_payload_ack(raw, msg);<br>+   case 
DP_REMOTE_DPCD_READ:<br>+         return 
drm_dp_sideband_parse_remote_dpcd_read(raw, msg);<br>+    case 
DP_REMOTE_DPCD_WRITE:<br>+                return 
drm_dp_sideband_parse_remote_dpcd_write(raw, msg);<br>+   case 
DP_REMOTE_I2C_READ:<br>+          return 
drm_dp_sideband_parse_remote_i2c_read_ack(raw, msg);<br>+ case 
DP_ENUM_PATH_RESOURCES:<br>+              return 
drm_dp_sideband_parse_enum_path_resources_ack(raw, msg);<br>+     case 
DP_ALLOCATE_PAYLOAD:<br>+         return 
drm_dp_sideband_parse_allocate_payload_ack(raw, msg);<br>+        default:<br>+
                DRM_ERROR("Got unknown reply 0x%02x\n", msg->req_type);<br>+         
return false;<br>+        }<br>+}<br>+<br>+static bool 
drm_dp_sideband_parse_connection_status_notify(struct 
drm_dp_sideband_msg_rx *raw,<br>+                                                    struct 
drm_dp_sideband_msg_req_body *msg)<br>+{<br>+       int idx = 1;<br>+<br>+      
msg->u.conn_stat.port_number = (raw->msg[idx] & 0xf0) >>
 4;<br>+  idx++;<br>+       if (idx > raw->curlen)<br>+         goto fail_len;<br>+<br>+
        memcpy(msg->u.conn_stat.guid, &raw->msg[idx], 16);<br>+ idx 
+= 16;<br>+       if (idx > raw->curlen)<br>+         goto fail_len;<br>+<br>+    
msg->u.conn_stat.legacy_device_plug_status = (raw->msg[idx] 
>> 6) & 0x1;<br>+       
msg->u.conn_stat.displayport_device_plug_status = (raw->msg[idx] 
>> 5) & 0x1;<br>+       
msg->u.conn_stat.message_capability_status = (raw->msg[idx] 
>> 4) & 0x1;<br>+       msg->u.conn_stat.input_port = 
(raw->msg[idx] >> 3) & 0x1;<br>+     
msg->u.conn_stat.peer_device_type = (raw->msg[idx] & 0x7);<br>+
        idx++;<br>+       return true;<br>+fail_len:<br>+     DRM_DEBUG_KMS("connection 
status reply parse length fail %d %d\n", idx, raw->curlen);<br>+  
return false;<br>+}<br>+<br>+static bool 
drm_dp_sideband_parse_resource_status_notify(struct 
drm_dp_sideband_msg_rx *raw,<br>+                                                    struct 
drm_dp_sideband_msg_req_body *msg)<br>+{<br>+       int idx = 1;<br>+<br>+      
msg->u.resource_stat.port_number = (raw->msg[idx] & 0xf0) 
>> 4;<br>+  idx++;<br>+       if (idx > raw->curlen)<br>+         goto 
fail_len;<br>+<br>+ memcpy(msg->u.resource_stat.guid, 
&raw->msg[idx], 16);<br>+  idx += 16;<br>+   if (idx > 
raw->curlen)<br>+              goto fail_len;<br>+<br>+    
msg->u.resource_stat.available_pbn = (raw->msg[idx] << 8) | 
(raw->msg[idx + 1]);<br>+      idx++;<br>+       return true;<br>+fail_len:<br>+
        DRM_DEBUG_KMS("resource status reply parse length fail %d %d\n", idx, 
raw->curlen);<br>+     return false;<br>+}<br>+<br>+static bool 
drm_dp_sideband_parse_req(struct drm_dp_sideband_msg_rx *raw,<br>+                                  
    struct drm_dp_sideband_msg_req_body *msg)<br>+{<br>+    memset(msg, 0, 
sizeof(*msg));<br>+       msg->req_type = (raw->msg[0] & 0x7f);<br>+<br>+
        switch (msg->req_type) {<br>+  case DP_CONNECTION_STATUS_NOTIFY:<br>+
                return drm_dp_sideband_parse_connection_status_notify(raw, msg);<br>+     
case DP_RESOURCE_STATUS_NOTIFY:<br>+              return 
drm_dp_sideband_parse_resource_status_notify(raw, msg);<br>+      default:<br>+
                DRM_ERROR("Got unknown request 0x%02x\n", msg->req_type);<br>+               
return false;<br>+        }<br>+}<br>+<br>+static int build_dpcd_write(struct 
drm_dp_sideband_msg_tx *msg, u8 port_num, u32 offset, u8 num_bytes, u8 
*bytes)<br>+{<br>+  struct drm_dp_sideband_msg_req_body req;<br>+<br>+  
req.req_type = DP_REMOTE_DPCD_WRITE;<br>+ req.u.dpcd_write.port_number =
 port_num;<br>+   req.u.dpcd_write.dpcd_address = offset;<br>+      
req.u.dpcd_write.num_bytes = num_bytes;<br>+      
memcpy(req.u.dpcd_write.bytes, bytes, num_bytes);<br>+    
drm_dp_encode_sideband_req(&req, msg);<br>+<br>+        return 0;<br>+}<br>+<br>+static
 int build_link_address(struct drm_dp_sideband_msg_tx *msg)<br>+{<br>+      
struct drm_dp_sideband_msg_req_body req;<br>+<br>+  req.req_type = 
DP_LINK_ADDRESS;<br>+     drm_dp_encode_sideband_req(&req, msg);<br>+   
return 0;<br>+}<br>+<br>+static int build_enum_path_resources(struct 
drm_dp_sideband_msg_tx *msg, int port_num)<br>+{<br>+       struct 
drm_dp_sideband_msg_req_body req;<br>+<br>+ req.req_type = 
DP_ENUM_PATH_RESOURCES;<br>+      req.u.port_num.port_number = port_num;<br>+
        drm_dp_encode_sideband_req(&req, msg);<br>+<br>+ msg->path_msg = 
true;<br>+        return 0;<br>+}<br>+<br>+static int 
build_allocate_payload(struct drm_dp_sideband_msg_tx *msg, int port_num,<br>+
                                  u8 vcpi, uint16_t pbn)<br>+{<br>+ struct 
drm_dp_sideband_msg_req_body req;<br>+    memset(&req, 0, sizeof(req));<br>+
        req.req_type = DP_ALLOCATE_PAYLOAD;<br>+  
req.u.allocate_payload.port_number = port_num;<br>+       
req.u.allocate_payload.vcpi = vcpi;<br>+  req.u.allocate_payload.pbn = 
pbn;<br>+ drm_dp_encode_sideband_req(&req, msg);<br>+   
msg->path_msg = true;<br>+     return 0;<br>+}<br>+<br>+static int 
drm_dp_mst_assign_payload_id(struct drm_dp_mst_topology_mgr *mgr,<br>+            
                        struct drm_dp_vcpi *vcpi)<br>+{<br>+        int ret;<br>+<br>+  
mutex_lock(&mgr->payload_lock);<br>+       ret = 
find_first_zero_bit(&mgr->payload_mask, mgr->max_payloads + 
1);<br>+  if (ret > mgr->max_payloads) {<br>+         ret = -EINVAL;<br>+       
        DRM_DEBUG_KMS("out of payload ids %d\n", ret);<br>+             goto out_unlock;<br>+
        }<br>+<br>+ set_bit(ret, &mgr->payload_mask);<br>+     vcpi->vcpi
 = ret;<br>+      mgr->proposed_vcpis[ret - 1] = vcpi;<br>+out_unlock:<br>+
        mutex_unlock(&mgr->payload_lock);<br>+     return ret;<br>+}<br>+<br>+static
 void drm_dp_mst_put_payload_id(struct drm_dp_mst_topology_mgr *mgr,<br>+
                                      int id)<br>+{<br>+    if (id == 0)<br>+         return;<br>+<br>+   
mutex_lock(&mgr->payload_lock);<br>+       DRM_DEBUG_KMS("putting 
payload %d\n", id);<br>+     clear_bit(id, &mgr->payload_mask);<br>+    
mgr->proposed_vcpis[id - 1] = NULL;<br>+       
mutex_unlock(&mgr->payload_lock);<br>+}<br>+<br>+static bool 
check_txmsg_state(struct drm_dp_mst_topology_mgr *mgr,<br>+                             
struct drm_dp_sideband_msg_tx *txmsg)<br>+{<br>+    bool ret;<br>+    
mutex_lock(&mgr->qlock);<br>+      ret = (txmsg->state == 
DRM_DP_SIDEBAND_TX_RX ||<br>+            txmsg->state == 
DRM_DP_SIDEBAND_TX_TIMEOUT);<br>+ mutex_unlock(&mgr->qlock);<br>+
        return ret;<br>+}<br>+<br>+static int drm_dp_mst_wait_tx_reply(struct 
drm_dp_mst_branch *mstb,<br>+                                 struct drm_dp_sideband_msg_tx 
*txmsg)<br>+{<br>+  struct drm_dp_mst_topology_mgr *mgr = mstb->mgr;<br>+
        int ret;<br>+<br>+  ret = wait_event_timeout(mgr->tx_waitq,<br>+                            
check_txmsg_state(mgr, txmsg),<br>+                                (4 * HZ));<br>+  
mutex_lock(&mstb->mgr->qlock);<br>+     if (ret > 0) {<br>+            if
 (txmsg->state == DRM_DP_SIDEBAND_TX_TIMEOUT) {<br>+                   ret = -EIO;<br>+
                        goto out;<br>+            }<br>+    } else {<br>+             DRM_DEBUG_KMS("timed out msg 
send %p %d %d\n", txmsg, txmsg->state, txmsg->seqno);<br>+<br>+          
/* dump some state */<br>+                ret = -EIO;<br>+<br>+               /* remove from q */<br>+
                if (txmsg->state == DRM_DP_SIDEBAND_TX_QUEUED ||<br>+              
txmsg->state == DRM_DP_SIDEBAND_TX_START_SEND) {<br>+                  
list_del(&txmsg->next);<br>+               }<br>+<br>+         if (txmsg->state ==
 DRM_DP_SIDEBAND_TX_START_SEND ||<br>+                txmsg->state == 
DRM_DP_SIDEBAND_TX_SENT) {<br>+                   mstb->tx_slots[txmsg->seqno] = 
NULL;<br>+                }<br>+    }<br>+out:<br>+     mutex_unlock(&mgr->qlock);<br>+<br>+
        return ret;<br>+}<br>+<br>+static struct drm_dp_mst_branch 
*drm_dp_add_mst_branch_device(u8 lct, u8 *rad)<br>+{<br>+   struct 
drm_dp_mst_branch *mstb;<br>+<br>+  mstb = kzalloc(sizeof(*mstb), 
GFP_KERNEL);<br>+ if (!mstb)<br>+           return NULL;<br>+<br>+      mstb->lct =
 lct;<br>+        if (lct > 1)<br>+              memcpy(mstb->rad, rad, lct / 2);<br>+
        INIT_LIST_HEAD(&mstb->ports);<br>+ 
kref_init(&mstb->kref);<br>+       return mstb;<br>+}<br>+<br>+static 
void drm_dp_destroy_mst_branch_device(struct kref *kref)<br>+{<br>+ 
struct drm_dp_mst_branch *mstb = container_of(kref, struct 
drm_dp_mst_branch, kref);<br>+    struct drm_dp_mst_port *port, *tmp;<br>+  
bool wake_tx = false;<br>+<br>+     
cancel_work_sync(&mstb->mgr->work);<br>+<br>+     /*<br>+    * 
destroy all ports - don't need lock<br>+   * as there are no more 
references to the mst branch<br>+  * device at this point.<br>+      */<br>+
        list_for_each_entry_safe(port, tmp, &mstb->ports, next) {<br>+             
list_del(&port->next);<br>+                drm_dp_put_port(port);<br>+       }<br>+<br>+
        /* drop any tx slots msg */<br>+  
mutex_lock(&mstb->mgr->qlock);<br>+     if (mstb->tx_slots[0]) {<br>+
                mstb->tx_slots[0]->state = DRM_DP_SIDEBAND_TX_TIMEOUT;<br>+         
mstb->tx_slots[0] = NULL;<br>+         wake_tx = true;<br>+      }<br>+    if 
(mstb->tx_slots[1]) {<br>+             mstb->tx_slots[1]->state = 
DRM_DP_SIDEBAND_TX_TIMEOUT;<br>+          mstb->tx_slots[1] = NULL;<br>+         
wake_tx = true;<br>+      }<br>+    mutex_unlock(&mstb->mgr->qlock);<br>+<br>+
        if (wake_tx)<br>+         wake_up(&mstb->mgr->tx_waitq);<br>+     
kfree(mstb);<br>+}<br>+<br>+static void 
drm_dp_put_mst_branch_device(struct drm_dp_mst_branch *mstb)<br>+{<br>+     
kref_put(&mstb->kref, drm_dp_destroy_mst_branch_device);<br>+}<br>+<br>+<br>+static
 void drm_dp_port_teardown_pdt(struct drm_dp_mst_port *port, int 
old_pdt)<br>+{<br>+ switch (old_pdt) {<br>+   case 
DP_PEER_DEVICE_DP_LEGACY_CONV:<br>+       case DP_PEER_DEVICE_SST_SINK:<br>+                
/* remove i2c over sideband */<br>+               
drm_dp_mst_unregister_i2c_bus(&port->aux);<br>+            break;<br>+       case
 DP_PEER_DEVICE_MST_BRANCHING:<br>+               
drm_dp_put_mst_branch_device(port->mstb);<br>+         port->mstb = NULL;<br>+
                break;<br>+       }<br>+}<br>+<br>+static void drm_dp_destroy_port(struct 
kref *kref)<br>+{<br>+      struct drm_dp_mst_port *port = container_of(kref,
 struct drm_dp_mst_port, kref);<br>+      struct drm_dp_mst_topology_mgr *mgr
 = port->mgr;<br>+     if (!port->input) {<br>+               
port->vcpi.num_slots = 0;<br>+         if (port->connector)<br>+                      
(*port->mgr->cbs->destroy_connector)(mgr, port->connector);<br>+
                drm_dp_port_teardown_pdt(port, port->pdt);<br>+<br>+             if 
(!port->input && port->vcpi.vcpi > 0)<br>+                       
drm_dp_mst_put_payload_id(mgr, port->vcpi.vcpi);<br>+  }<br>+    
kfree(port);<br>+}<br>+<br>+static void drm_dp_put_port(struct 
drm_dp_mst_port *port)<br>+{<br>+   kref_put(&port->kref, 
drm_dp_destroy_port);<br>+}<br>+<br>+static struct drm_dp_mst_branch 
*drm_dp_mst_get_validated_mstb_ref_locked(struct drm_dp_mst_branch 
*mstb, struct drm_dp_mst_branch *to_find)<br>+{<br>+        struct 
drm_dp_mst_port *port;<br>+       struct drm_dp_mst_branch *rmstb;<br>+     if 
(to_find == mstb) {<br>+          kref_get(&mstb->kref);<br>+                return 
mstb;<br>+        }<br>+    list_for_each_entry(port, &mstb->ports, next) {<br>+
                if (port->mstb) {<br>+                 rmstb = 
drm_dp_mst_get_validated_mstb_ref_locked(port->mstb, to_find);<br>+            
        if (rmstb)<br>+                           return rmstb;<br>+                }<br>+    }<br>+    return NULL;<br>+}<br>+<br>+static
 struct drm_dp_mst_branch *drm_dp_get_validated_mstb_ref(struct 
drm_dp_mst_topology_mgr *mgr, struct drm_dp_mst_branch *mstb)<br>+{<br>+
        struct drm_dp_mst_branch *rmstb = NULL;<br>+      
mutex_lock(&mgr->lock);<br>+       if (mgr->mst_primary)<br>+             rmstb
 = drm_dp_mst_get_validated_mstb_ref_locked(mgr->mst_primary, mstb);<br>+
        mutex_unlock(&mgr->lock);<br>+     return rmstb;<br>+}<br>+<br>+static
 struct drm_dp_mst_port *drm_dp_mst_get_port_ref_locked(struct 
drm_dp_mst_branch *mstb, struct drm_dp_mst_port *to_find)<br>+{<br>+        
struct drm_dp_mst_port *port, *mport;<br>+<br>+     
list_for_each_entry(port, &mstb->ports, next) {<br>+               if (port ==
 to_find) {<br>+                  kref_get(&port->kref);<br>+                        return port;<br>+
                }<br>+            if (port->mstb) {<br>+                 mport = 
drm_dp_mst_get_port_ref_locked(port->mstb, to_find);<br>+                      if 
(mport)<br>+                              return mport;<br>+                }<br>+    }<br>+    return NULL;<br>+}<br>+<br>+static
 struct drm_dp_mst_port *drm_dp_get_validated_port_ref(struct 
drm_dp_mst_topology_mgr *mgr, struct drm_dp_mst_port *port)<br>+{<br>+      
struct drm_dp_mst_port *rport = NULL;<br>+        
mutex_lock(&mgr->lock);<br>+       if (mgr->mst_primary)<br>+             rport
 = drm_dp_mst_get_port_ref_locked(mgr->mst_primary, port);<br>+        
mutex_unlock(&mgr->lock);<br>+     return rport;<br>+}<br>+<br>+static
 struct drm_dp_mst_port *drm_dp_get_port(struct drm_dp_mst_branch *mstb,
 u8 port_num)<br>+{<br>+    struct drm_dp_mst_port *port;<br>+<br>+     
list_for_each_entry(port, &mstb->ports, next) {<br>+               if 
(port->port_num == port_num) {<br>+                    kref_get(&port->kref);<br>+
                        return port;<br>+         }<br>+    }<br>+<br>+ return NULL;<br>+}<br>+<br>+/*<br>+
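 * A RAD packs one port number per nibble, high nibble first: e.g. a<br>+ * branch reached from the primary via port 1 then port 3 (hypothetical<br>+ * topology) has LCT 3 and rad[0] = 0x13.<br>+ *<br>+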
 * calculate a new RAD for this MST branch device<br>+ * if parent has 
an LCT of 2 then it has 1 nibble of RAD,<br>+ * if parent has an LCT of 3
 then it has 2 nibbles of RAD,<br>+ */<br>+static u8 
drm_dp_calculate_rad(struct drm_dp_mst_port *port,<br>+                            u8 *rad)<br>+{<br>+
        int lct = port->parent->lct;<br>+   int shift = 4;<br>+       int idx = 
lct / 2;<br>+     if (lct > 1) {<br>+            memcpy(rad, 
port->parent->rad, idx);<br>+               shift = (lct % 2) ? 4 : 0;<br>+   } 
else<br>+         rad[0] = 0;<br>+<br>+       rad[idx] |= port->port_num << 
shift;<br>+       return lct + 1;<br>+}<br>+<br>+/*<br>+ * return true if the new mstb needs its link address sent<br>+ */<br>+static bool 
drm_dp_port_setup_pdt(struct drm_dp_mst_port *port)<br>+{<br>+      int ret;<br>+
        u8 rad[6], lct;<br>+      bool send_link = false;<br>+      switch (port->pdt)
 {<br>+   case DP_PEER_DEVICE_DP_LEGACY_CONV:<br>+  case 
DP_PEER_DEVICE_SST_SINK:<br>+             /* add i2c over sideband */<br>+          ret = 
drm_dp_mst_register_i2c_bus(&port->aux);<br>+              break;<br>+       case 
DP_PEER_DEVICE_MST_BRANCHING:<br>+                lct = drm_dp_calculate_rad(port, 
rad);<br>+<br>+             port->mstb = drm_dp_add_mst_branch_device(lct, rad);<br>+             if (!port->mstb)<br>+                     break;<br>+<br>+                port->mstb->mgr = port->mgr;<br>+                port->mstb->port_parent = port;<br>+<br>+             send_link = true;<br>+    
        break;<br>+       }<br>+    return send_link;<br>+}<br>+<br>+static void 
drm_dp_check_port_guid(struct drm_dp_mst_branch *mstb,<br>+                                  struct
 drm_dp_mst_port *port)<br>+{<br>+  int ret;<br>+     if (port->dpcd_rev 
>= 0x12) {<br>+                port->guid_valid = 
drm_dp_validate_guid(mstb->mgr, port->guid);<br>+           if 
(!port->guid_valid) {<br>+                     ret = 
drm_dp_send_dpcd_write(mstb->mgr,<br>+                                              port,<br>+                                               
 DP_GUID,<br>+                                                 16, port->guid);<br>+                     port->guid_valid =
 true;<br>+               }<br>+    }<br>+}<br>+<br>+static void 
build_mst_prop_path(struct drm_dp_mst_port *port,<br>+                            struct 
drm_dp_mst_branch *mstb,<br>+                             char *proppath)<br>+{<br>+  int i;<br>+       
char temp[8];<br>+        snprintf(proppath, 255, "mst:%d", 
mstb->mgr->conn_base_id);<br>+      for (i = 0; i < (mstb->lct - 
1); i++) {<br>+           int shift = (i % 2) ? 0 : 4;<br>+         int port_num = 
(mstb->rad[i / 2] >> shift) & 0xf;<br>+          snprintf(temp, 8, "-%d", 
port_num);<br>+           strncat(proppath, temp, 255);<br>+        }<br>+    
snprintf(temp, 8, "-%d", port->port_num);<br>+       strncat(proppath, 
temp, 255);<br>+}<br>+<br>+static void drm_dp_add_port(struct 
drm_dp_mst_branch *mstb,<br>+                         struct device *dev,<br>+                          
struct drm_dp_link_addr_reply_port *port_msg)<br>+{<br>+    struct 
drm_dp_mst_port *port;<br>+       bool ret;<br>+    bool created = false;<br>+        
int old_pdt = 0;<br>+     int old_ddps = 0;<br>+    port = 
drm_dp_get_port(mstb, port_msg->port_number);<br>+     if (!port) {<br>+ 
        port = kzalloc(sizeof(*port), GFP_KERNEL);<br>+           if (!port)<br>+                   
return;<br>+              kref_init(&port->kref);<br>+               port->parent = 
mstb;<br>+                port->port_num = port_msg->port_number;<br>+                
port->mgr = mstb->mgr;<br>+         port->aux.name = "DPMST";<br>+               
port->aux.dev = dev;<br>+              created = true;<br>+      } else {<br>+             
old_pdt = port->pdt;<br>+              old_ddps = port->ddps;<br>+    }<br>+<br>+
        port->pdt = port_msg->peer_device_type;<br>+        port->input = 
port_msg->input_port;<br>+     port->mcs = port_msg->mcs;<br>+     
port->ddps = port_msg->ddps;<br>+   port->ldps = 
port_msg->legacy_device_plug_status;<br>+      port->dpcd_rev = 
port_msg->dpcd_revision;<br>+<br>+       memcpy(port->guid, 
port_msg->peer_guid, 16);<br>+<br>+      /* manage mstb port lists with 
mgr lock - take a reference<br>+     for this list */<br>+  if (created) {<br>+
                mutex_lock(&mstb->mgr->lock);<br>+              
kref_get(&port->kref);<br>+                list_add(&port->next, 
&mstb->ports);<br>+                mutex_unlock(&mstb->mgr->lock);<br>+
        }<br>+<br>+ if (old_ddps != port->ddps) {<br>+             if (port->ddps) {<br>+
                        drm_dp_check_port_guid(mstb, port);<br>+                  if (!port->input)<br>+ 
                        drm_dp_send_enum_path_resources(mstb->mgr, mstb, port);<br>+           } 
else {<br>+                       port->guid_valid = false;<br>+                 port->available_pbn
 = 0;<br>+                        }<br>+    }<br>+<br>+ if (old_pdt != port->pdt && 
!port->input) {<br>+           drm_dp_port_teardown_pdt(port, old_pdt);<br>+<br>+
                ret = drm_dp_port_setup_pdt(port);<br>+           if (ret == true) {<br>+                   
drm_dp_send_link_address(mstb->mgr, port->mstb);<br>+                       
port->mstb->link_address_sent = true;<br>+          }<br>+    }<br>+<br>+ if 
(created && !port->input) {<br>+               char proppath[255];<br>+          
build_mst_prop_path(port, mstb, proppath);<br>+           port->connector = 
(*mstb->mgr->cbs->add_connector)(mstb->mgr, port, proppath);<br>+
        }<br>+<br>+ /* put reference to this port */<br>+     
drm_dp_put_port(port);<br>+}<br>+<br>+static void 
drm_dp_update_port(struct drm_dp_mst_branch *mstb,<br>+                          struct 
drm_dp_connection_status_notify *conn_stat)<br>+{<br>+      struct 
drm_dp_mst_port *port;<br>+       int old_pdt;<br>+ int old_ddps;<br>+        bool 
dowork = false;<br>+      port = drm_dp_get_port(mstb, 
conn_stat->port_number);<br>+  if (!port)<br>+           return;<br>+<br>+   
old_ddps = port->ddps;<br>+    old_pdt = port->pdt;<br>+      port->pdt
 = conn_stat->peer_device_type;<br>+   port->mcs = 
conn_stat->message_capability_status;<br>+     port->ldps = 
conn_stat->legacy_device_plug_status;<br>+     port->ddps = 
conn_stat->displayport_device_plug_status;<br>+<br>+     if (old_ddps != 
port->ddps) {<br>+             if (port->ddps) {<br>+                 
drm_dp_check_port_guid(mstb, port);<br>+                  dowork = true;<br>+               } else {<br>+
                        port->guid_valid = false;<br>+                 port->available_pbn = 0;<br>+  
        }<br>+    }<br>+    if (old_pdt != port->pdt && !port->input) {<br>+
                drm_dp_port_teardown_pdt(port, old_pdt);<br>+<br>+          if 
(drm_dp_port_setup_pdt(port))<br>+                        dowork = true;<br>+       }<br>+<br>+ 
drm_dp_put_port(port);<br>+       if (dowork)<br>+          queue_work(system_long_wq,
 &mstb->mgr->work);<br>+<br>+}<br>+<br>+static struct 
drm_dp_mst_branch *drm_dp_get_mst_branch_device(struct 
drm_dp_mst_topology_mgr *mgr,<br>+                                                               u8 lct, u8 *rad)<br>+{<br>+
        struct drm_dp_mst_branch *mstb;<br>+      struct drm_dp_mst_port *port;<br>+
        int i;<br>+       /* find the port by iterating down */<br>+        mstb = 
mgr->mst_primary;<br>+<br>+      for (i = 0; i < lct - 1; i++) {<br>+           
int shift = (i % 2) ? 0 : 4;<br>+         int port_num = (rad[i / 2] >> shift) & 0xf;<br>+<br>+            list_for_each_entry(port, &mstb->ports, next) {<br>+
                        if (port->port_num == port_num) {<br>+                         if (!port->mstb) {<br>+
                                        DRM_ERROR("failed to lookup MSTB with lct %d, rad %02x\n", lct, 
rad[0]);<br>+                                     return NULL;<br>+                         }<br>+<br>+                         mstb = 
port->mstb;<br>+                               break;<br>+                       }<br>+            }<br>+    }<br>+    
kref_get(&mstb->kref);<br>+        return mstb;<br>+}<br>+<br>+static 
void drm_dp_check_and_send_link_address(struct drm_dp_mst_topology_mgr 
*mgr,<br>+                                               struct drm_dp_mst_branch *mstb)<br>+{<br>+   struct 
drm_dp_mst_port *port;<br>+<br>+    if (!mstb->link_address_sent) {<br>+
                drm_dp_send_link_address(mgr, mstb);<br>+         mstb->link_address_sent =
 true;<br>+       }<br>+    list_for_each_entry(port, &mstb->ports, next) {<br>+
                if (port->input)<br>+                  continue;<br>+<br>+         if (!port->ddps)<br>+
                        continue;<br>+<br>+         if (!port->available_pbn)<br>+                 
drm_dp_send_enum_path_resources(mgr, mstb, port);<br>+<br>+         if 
(port->mstb)<br>+                      drm_dp_check_and_send_link_address(mgr, 
port->mstb);<br>+      }<br>+}<br>+<br>+static void 
drm_dp_mst_link_probe_work(struct work_struct *work)<br>+{<br>+     struct 
drm_dp_mst_topology_mgr *mgr = container_of(work, struct 
drm_dp_mst_topology_mgr, work);<br>+<br>+   
drm_dp_check_and_send_link_address(mgr, mgr->mst_primary);<br>+<br>+}<br>+<br>+static
 bool drm_dp_validate_guid(struct drm_dp_mst_topology_mgr *mgr,<br>+                              
 u8 *guid)<br>+{<br>+       static u8 zero_guid[16];<br>+<br>+  if 
(!memcmp(guid, zero_guid, 16)) {<br>+             u64 salt = get_jiffies_64();<br>+
                memcpy(&guid[0], &salt, sizeof(u64));<br>+                
memcpy(&guid[8], &salt, sizeof(u64));<br>+                return false;<br>+        }<br>+
        return true;<br>+}<br>+<br>+#if 0<br>+static int build_dpcd_read(struct
 drm_dp_sideband_msg_tx *msg, u8 port_num, u32 offset, u8 num_bytes)<br>+{<br>+
        struct drm_dp_sideband_msg_req_body req;<br>+<br>+  req.req_type = 
DP_REMOTE_DPCD_READ;<br>+ req.u.dpcd_read.port_number = port_num;<br>+      
req.u.dpcd_read.dpcd_address = offset;<br>+       req.u.dpcd_read.num_bytes = 
num_bytes;<br>+   drm_dp_encode_sideband_req(&req, msg);<br>+<br>+        
return 0;<br>+}<br>+#endif<br>+<br>+static int 
drm_dp_send_sideband_msg(struct drm_dp_mst_topology_mgr *mgr,<br>+                                  
  bool up, u8 *msg, int len)<br>+{<br>+     int ret;<br>+     int regbase = up ?
 DP_SIDEBAND_MSG_UP_REP_BASE : DP_SIDEBAND_MSG_DOWN_REQ_BASE;<br>+        int 
tosend, total, offset;<br>+       int retries = 0;<br>+<br>+retry:<br>+ total =
 len;<br>+        offset = 0;<br>+  do {<br>+         tosend = 
min3(mgr->max_dpcd_transaction_bytes, 16, total);<br>+<br>+              
mutex_lock(&mgr->aux_lock);<br>+           ret = 
drm_dp_dpcd_write(mgr->aux, regbase + offset,<br>+                                     
&msg[offset],<br>+                                    tosend);<br>+             
mutex_unlock(&mgr->aux_lock);<br>+         if (ret != tosend) {<br>+                 
if (ret == -EIO && retries < 5) {<br>+                         retries++;<br>+                           
goto retry;<br>+                  }<br>+                    DRM_DEBUG_KMS("failed to dpcd write %d 
%d\n", tosend, ret);<br>+                    WARN(1, "fail\n");<br>+<br>+                      return -EIO;<br>+
                }<br>+            offset += tosend;<br>+            total -= tosend;<br>+     } while (total 
> 0);<br>+     return 0;<br>+}<br>+<br>+static int 
set_hdr_from_dst_qlock(struct drm_dp_sideband_msg_hdr *hdr,<br>+                            
struct drm_dp_sideband_msg_tx *txmsg)<br>+{<br>+    struct 
drm_dp_mst_branch *mstb = txmsg->dst;<br>+<br>+  /* both msg slots are
 full */<br>+     if (txmsg->seqno == -1) {<br>+         if 
(mstb->tx_slots[0] && mstb->tx_slots[1]) {<br>+                     
DRM_DEBUG_KMS("%s: failed to find slot\n", __func__);<br>+                      return 
-EAGAIN;<br>+             }<br>+            if (mstb->tx_slots[0] == NULL && 
mstb->tx_slots[1] == NULL) {<br>+                      txmsg->seqno = 
mstb->last_seqno;<br>+                 mstb->last_seqno ^= 1;<br>+            } else if 
(mstb->tx_slots[0] == NULL)<br>+                       txmsg->seqno = 0;<br>+         else<br>+
                        txmsg->seqno = 1;<br>+         mstb->tx_slots[txmsg->seqno] = 
txmsg;<br>+       }<br>+    hdr->broadcast = 0;<br>+       hdr->path_msg = 
txmsg->path_msg;<br>+  hdr->lct = mstb->lct;<br>+  hdr->lcr = 
mstb->lct - 1;<br>+    if (mstb->lct > 1)<br>+             
memcpy(hdr->rad, mstb->rad, mstb->lct / 2);<br>+ hdr->seqno =
 txmsg->seqno;<br>+    return 0;<br>+}<br>+/*<br>+ * process a single 
block of the next message in the sideband queue<br>+ */<br>+static int 
process_single_tx_qlock(struct drm_dp_mst_topology_mgr *mgr,<br>+                            
struct drm_dp_sideband_msg_tx *txmsg,<br>+                                   bool up)<br>+{<br>+      u8 
chunk[48];<br>+   struct drm_dp_sideband_msg_hdr hdr;<br>+  int len, space,
 idx, tosend;<br>+        int ret;<br>+<br>+  if (txmsg->state == 
DRM_DP_SIDEBAND_TX_QUEUED) {<br>+         txmsg->seqno = -1;<br>+                
txmsg->state = DRM_DP_SIDEBAND_TX_START_SEND;<br>+     }<br>+<br>+ /* 
make hdr from dst mst - for replies use seqno<br>+           otherwise assign 
one */<br>+       ret = set_hdr_from_dst_qlock(&hdr, txmsg);<br>+       if (ret 
< 0)<br>+              return ret;<br>+<br>+       /* amount left to send in this 
message */<br>+   len = txmsg->cur_len - txmsg->cur_offset;<br>+<br>+
        /* 48 - sideband msg size - 1 byte for data CRC, x header bytes */<br>+
        space = 48 - 1 - drm_dp_calc_sb_hdr_size(&hdr);<br>+<br>+       tosend = 
min(len, space);<br>+     if (len == txmsg->cur_len)<br>+                hdr.somt = 1;<br>+
        if (space >= len)<br>+         hdr.eomt = 1;<br>+<br>+<br>+  hdr.msg_len = 
tosend + 1;<br>+  drm_dp_encode_sideband_msg_hdr(&hdr, chunk, 
&idx);<br>+   memcpy(&chunk[idx], 
&txmsg->msg[txmsg->cur_offset], tosend);<br>+   /* add crc at 
end */<br>+       drm_dp_crc_sideband_chunk_req(&chunk[idx], tosend);<br>+
        idx += tosend + 1;<br>+<br>+        ret = drm_dp_send_sideband_msg(mgr, up, 
chunk, idx);<br>+ if (ret) {<br>+           DRM_DEBUG_KMS("sideband msg failed to
 send\n");<br>+              return ret;<br>+  }<br>+<br>+ txmsg->cur_offset += 
tosend;<br>+      if (txmsg->cur_offset == txmsg->cur_len) {<br>+             
txmsg->state = DRM_DP_SIDEBAND_TX_SENT;<br>+           return 1;<br>+    }<br>+    
return 0;<br>+}<br>+<br>+/* must be called holding qlock */<br>+static 
void process_single_down_tx_qlock(struct drm_dp_mst_topology_mgr *mgr)<br>+{<br>+
        struct drm_dp_sideband_msg_tx *txmsg;<br>+        int ret;<br>+<br>+  /* 
construct a chunk from the first msg in the tx_msg queue */<br>+  if 
(list_empty(&mgr->tx_msg_downq)) {<br>+            
mgr->tx_down_in_progress = false;<br>+         return;<br>+      }<br>+    
mgr->tx_down_in_progress = true;<br>+<br>+       txmsg = 
list_first_entry(&mgr->tx_msg_downq, struct 
drm_dp_sideband_msg_tx, next);<br>+       ret = process_single_tx_qlock(mgr, 
txmsg, false);<br>+       if (ret == 1) {<br>+              /* txmsg is sent it should be 
in the slots now */<br>+          list_del(&txmsg->next);<br>+       } else if 
(ret) {<br>+              DRM_DEBUG_KMS("failed to send msg in q %d\n", ret);<br>+                
list_del(&txmsg->next);<br>+               if (txmsg->seqno != -1)<br>+                   
txmsg->dst->tx_slots[txmsg->seqno] = NULL;<br>+          
txmsg->state = DRM_DP_SIDEBAND_TX_TIMEOUT;<br>+                
wake_up(&mgr->tx_waitq);<br>+      }<br>+    if 
(list_empty(&mgr->tx_msg_downq)) {<br>+            
mgr->tx_down_in_progress = false;<br>+         return;<br>+      }<br>+}<br>+<br>+/*
 called holding qlock */<br>+static void 
process_single_up_tx_qlock(struct drm_dp_mst_topology_mgr *mgr)<br>+{<br>+
        struct drm_dp_sideband_msg_tx *txmsg;<br>+        int ret;<br>+<br>+  /* 
construct a chunk from the first msg in the tx_msg queue */<br>+  if 
(list_empty(&mgr->tx_msg_upq)) {<br>+              mgr->tx_up_in_progress =
 false;<br>+              return;<br>+      }<br>+<br>+ txmsg = 
list_first_entry(&mgr->tx_msg_upq, struct drm_dp_sideband_msg_tx,
 next);<br>+      ret = process_single_tx_qlock(mgr, txmsg, true);<br>+     if 
(ret == 1) {<br>+         /* up txmsgs aren't put in slots - so free after we 
send it */<br>+           list_del(&txmsg->next);<br>+               kfree(txmsg);<br>+
        } else if (ret)<br>+              DRM_DEBUG_KMS("failed to send msg in q %d\n", 
ret);<br>+        mgr->tx_up_in_progress = true;<br>+}<br>+<br>+static void 
drm_dp_queue_down_tx(struct drm_dp_mst_topology_mgr *mgr,<br>+                             
struct drm_dp_sideband_msg_tx *txmsg)<br>+{<br>+    
mutex_lock(&mgr->qlock);<br>+      list_add_tail(&txmsg->next, 
&mgr->tx_msg_downq);<br>+  if (!mgr->tx_down_in_progress)<br>+    
        process_single_down_tx_qlock(mgr);<br>+   
mutex_unlock(&mgr->qlock);<br>+}<br>+<br>+static int 
drm_dp_send_link_address(struct drm_dp_mst_topology_mgr *mgr,<br>+                                  
  struct drm_dp_mst_branch *mstb)<br>+{<br>+        int len;<br>+     struct 
drm_dp_sideband_msg_tx *txmsg;<br>+       int ret;<br>+<br>+  txmsg = 
kzalloc(sizeof(*txmsg), GFP_KERNEL);<br>+ if (!txmsg)<br>+          return 
-ENOMEM;<br>+<br>+  txmsg->dst = mstb;<br>+        len = 
build_link_address(txmsg);<br>+<br>+        drm_dp_queue_down_tx(mgr, txmsg);<br>+<br>+
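       /* waits up to 4 seconds (4 * HZ) for the reply */<br>+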
        ret = drm_dp_mst_wait_tx_reply(mstb, txmsg);<br>+ if (ret > 0) {<br>+
                int i;<br>+<br>+            if (txmsg->reply.reply_type == 1)<br>+                 
DRM_DEBUG_KMS("link address nak received\n");<br>+              else {<br>+                       
DRM_DEBUG_KMS("link address reply: %d\n", 
txmsg->reply.u.link_addr.nports);<br>+                 for (i = 0; i < 
txmsg->reply.u.link_addr.nports; i++) {<br>+                           DRM_DEBUG_KMS("port 
%d: input %d, pdt: %d, pn: %d, dpcd_rev: %02x, mcs: %d, ddps: %d, ldps 
%d\n", i,<br>+                                      
txmsg->reply.u.link_addr.ports[i].input_port,<br>+                                    
txmsg->reply.u.link_addr.ports[i].peer_device_type,<br>+                                      
txmsg->reply.u.link_addr.ports[i].port_number,<br>+                                   
txmsg->reply.u.link_addr.ports[i].dpcd_revision,<br>+                                 
txmsg->reply.u.link_addr.ports[i].mcs,<br>+                                   
txmsg->reply.u.link_addr.ports[i].ddps,<br>+                                  
txmsg->reply.u.link_addr.ports[i].legacy_device_plug_status);<br>+                     }<br>+
                        for (i = 0; i < txmsg->reply.u.link_addr.nports; i++) {<br>+                        
        drm_dp_add_port(mstb, mgr->dev, 
&txmsg->reply.u.link_addr.ports[i]);<br>+                  }<br>+                    
(*mgr->cbs->hotplug)(mgr);<br>+             }<br>+    } else<br>+               
DRM_DEBUG_KMS("link address failed %d\n", ret);<br>+<br>+ kfree(txmsg);<br>+
        return 0;<br>+}<br>+<br>+static int 
drm_dp_send_enum_path_resources(struct drm_dp_mst_topology_mgr *mgr,<br>+
                                           struct drm_dp_mst_branch *mstb,<br>+                                      struct 
drm_dp_mst_port *port)<br>+{<br>+   int len;<br>+     struct 
drm_dp_sideband_msg_tx *txmsg;<br>+       int ret;<br>+<br>+  txmsg = 
kzalloc(sizeof(*txmsg), GFP_KERNEL);<br>+ if (!txmsg)<br>+          return 
-ENOMEM;<br>+<br>+  txmsg->dst = mstb;<br>+        len = 
build_enum_path_resources(txmsg, port->port_num);<br>+<br>+      
drm_dp_queue_down_tx(mgr, txmsg);<br>+<br>+ ret = 
drm_dp_mst_wait_tx_reply(mstb, txmsg);<br>+       if (ret > 0) {<br>+            if 
(txmsg->reply.reply_type == 1)<br>+                    DRM_DEBUG_KMS("enum path 
resources nak received\n");<br>+             else {<br>+                       if (port->port_num !=
 txmsg->reply.u.path_resources.port_number)<br>+                               DRM_ERROR("got 
incorrect port in response\n");<br>+                 DRM_DEBUG_KMS("enum path 
resources %d: %d %d\n", txmsg->reply.u.path_resources.port_number, 
txmsg->reply.u.path_resources.full_payload_bw_number,<br>+                            
txmsg->reply.u.path_resources.avail_payload_bw_number);<br>+                   
port->available_pbn = 
txmsg->reply.u.path_resources.avail_payload_bw_number;<br>+            }<br>+    }<br>+<br>+
        kfree(txmsg);<br>+        return 0;<br>+}<br>+<br>+int 
drm_dp_payload_send_msg(struct drm_dp_mst_topology_mgr *mgr,<br>+                     
struct drm_dp_mst_port *port,<br>+                            int id,<br>+                      int pbn)<br>+{<br>+
        struct drm_dp_sideband_msg_tx *txmsg;<br>+        struct drm_dp_mst_branch 
*mstb;<br>+       int len, ret;<br>+<br>+     mstb = 
drm_dp_get_validated_mstb_ref(mgr, port->parent);<br>+ if (!mstb)<br>+
                return -EINVAL;<br>+<br>+   txmsg = kzalloc(sizeof(*txmsg), GFP_KERNEL);<br>+
        if (!txmsg) {<br>+                ret = -ENOMEM;<br>+               goto fail_put;<br>+       }<br>+<br>+
        txmsg->dst = mstb;<br>+        len = build_allocate_payload(txmsg, 
port->port_num,<br>+                                id,<br>+                                  pbn);<br>+<br>+        
drm_dp_queue_down_tx(mgr, txmsg);<br>+<br>+ ret = 
drm_dp_mst_wait_tx_reply(mstb, txmsg);<br>+       if (ret > 0) {<br>+            if 
(txmsg->reply.reply_type == 1) {<br>+                  ret = -EINVAL;<br>+               } else<br>+
                        ret = 0;<br>+     }<br>+    kfree(txmsg);<br>+fail_put:<br>+    
drm_dp_put_mst_branch_device(mstb);<br>+  return ret;<br>+}<br>+<br>+static
 int drm_dp_create_payload_step1(struct drm_dp_mst_topology_mgr *mgr,<br>+
                                       int id,<br>+                                      struct drm_dp_payload *payload)<br>+{<br>+
        int ret;<br>+<br>+  ret = drm_dp_dpcd_write_payload(mgr, id, payload);<br>+
        if (ret < 0) {<br>+            payload->payload_state = 0;<br>+               return 
ret;<br>+ }<br>+    payload->payload_state = DP_PAYLOAD_LOCAL;<br>+        
return 0;<br>+}<br>+<br>+int drm_dp_create_payload_step2(struct 
drm_dp_mst_topology_mgr *mgr,<br>+                                struct drm_dp_mst_port *port,<br>+
                                int id,<br>+                              struct drm_dp_payload *payload)<br>+{<br>+  int ret;<br>+
        ret = drm_dp_payload_send_msg(mgr, port, id, port->vcpi.pbn);<br>+     
if (ret < 0)<br>+              return ret;<br>+  payload->payload_state = 
DP_PAYLOAD_REMOTE;<br>+   return ret;<br>+}<br>+<br>+int 
drm_dp_destroy_payload_step1(struct drm_dp_mst_topology_mgr *mgr,<br>+            
                 struct drm_dp_mst_port *port,<br>+                                int id,<br>+                              struct 
drm_dp_payload *payload)<br>+{<br>+ DRM_DEBUG_KMS("\n");<br>+       /* it's okay for these to fail */<br>+    if (port) {<br>+          
drm_dp_payload_send_msg(mgr, port, id, 0);<br>+   }<br>+<br>+ 
drm_dp_dpcd_write_payload(mgr, id, payload);<br>+ 
payload->payload_state = 0;<br>+       return 0;<br>+}<br>+<br>+int 
drm_dp_destroy_payload_step2(struct drm_dp_mst_topology_mgr *mgr,<br>+            
                 int id,<br>+                              struct drm_dp_payload *payload)<br>+{<br>+ 
payload->payload_state = 0;<br>+       return 0;<br>+}<br>+<br>+/**<br>+ * 
drm_dp_update_payload_part1() - Execute payload update part 1<br>+ * 
@mgr: manager to use.<br>+ *<br>+ * This iterates over all proposed 
virtual channels, and tries to<br>+ * allocate space in the link for 
them. For 0->slots transitions,<br>+ * this step just writes the VCPI
 to the MST device. For slots->0<br>+ * transitions, this writes the 
updated VCPIs and removes the<br>+ * remote VC payloads.<br>+ *<br>+ * 
After calling this, the driver should generate ACT and payload<br>+ * 
packets.<br>+ */<br>+int drm_dp_update_payload_part1(struct 
drm_dp_mst_topology_mgr *mgr)<br>+{<br>+    int i;<br>+       int cur_slots = 1;<br>+
        struct drm_dp_payload req_payload;<br>+   struct drm_dp_mst_port *port;<br>+<br>+
        mutex_lock(&mgr->payload_lock);<br>+       for (i = 0; i < 
mgr->max_payloads; i++) {<br>+         /* solve the current payloads - 
compare to the hw ones<br>+                  - update the hw view */<br>+           
req_payload.start_slot = cur_slots;<br>+          if (mgr->proposed_vcpis[i])
 {<br>+                   port = container_of(mgr->proposed_vcpis[i], struct 
drm_dp_mst_port, vcpi);<br>+                      req_payload.num_slots = 
mgr->proposed_vcpis[i]->num_slots;<br>+             } else {<br>+                     port = 
NULL;<br>+                        req_payload.num_slots = 0;<br>+           }<br>+            /* work out what 
is required to happen with this payload */<br>+           if 
(mgr->payloads[i].start_slot != req_payload.start_slot ||<br>+             
mgr->payloads[i].num_slots != req_payload.num_slots) {<br>+<br>+                 /*
 need to push an update for this payload */<br>+                  if 
(req_payload.num_slots) {<br>+                            drm_dp_create_payload_step1(mgr, i + 
1, &req_payload);<br>+                                mgr->payloads[i].num_slots = 
req_payload.num_slots;<br>+                       } else if (mgr->payloads[i].num_slots) {<br>+
                                mgr->payloads[i].num_slots = 0;<br>+                           
drm_dp_destroy_payload_step1(mgr, port, i + 1, 
&mgr->payloads[i]);<br>+                           req_payload.payload_state = 
mgr->payloads[i].payload_state;<br>+                   }<br>+                    
mgr->payloads[i].start_slot = req_payload.start_slot;<br>+                     
mgr->payloads[i].payload_state = req_payload.payload_state;<br>+               }<br>+
                cur_slots += req_payload.num_slots;<br>+  }<br>+    
mutex_unlock(&mgr->payload_lock);<br>+<br>+  return 0;<br>+}<br>+EXPORT_SYMBOL(drm_dp_update_payload_part1);<br>+<br>+/**<br>+
 * drm_dp_update_payload_part2() - Execute payload update part 2<br>+ * 
@mgr: manager to use.<br>+ *<br>+ * This iterates over all proposed 
virtual channels, and tries to<br>+ * allocate space in the link for 
them. For 0->slots transitions,<br>+ * this step writes the remote VC
 payload commands. For slots->0<br>+ * this just resets some internal
 state.<br>+ */<br>+int drm_dp_update_payload_part2(struct 
drm_dp_mst_topology_mgr *mgr)<br>+{<br>+    struct drm_dp_mst_port *port;<br>+
        int i;<br>+       int ret = 0;<br>+     mutex_lock(&mgr->payload_lock);<br>+       
for (i = 0; i < mgr->max_payloads; i++) {<br>+<br>+           if 
(!mgr->proposed_vcpis[i])<br>+                 continue;<br>+<br>+         port = 
container_of(mgr->proposed_vcpis[i], struct drm_dp_mst_port, vcpi);<br>+<br>+
                DRM_DEBUG_KMS("payload %d %d\n", i, 
mgr->payloads[i].payload_state);<br>+          if 
(mgr->payloads[i].payload_state == DP_PAYLOAD_LOCAL) {<br>+                    ret = 
drm_dp_create_payload_step2(mgr, port, i + 1, &mgr->payloads[i]);<br>+
                } else if (mgr->payloads[i].payload_state == 
DP_PAYLOAD_DELETE_LOCAL) {<br>+                   ret = 
drm_dp_destroy_payload_step2(mgr, i + 1, &mgr->payloads[i]);<br>+
                }<br>+            if (ret) {<br>+                   mutex_unlock(&mgr->payload_lock);<br>+
                        return ret;<br>+          }<br>+    }<br>+    
mutex_unlock(&mgr->payload_lock);<br>+     return 0;<br>+}<br>+EXPORT_SYMBOL(drm_dp_update_payload_part2);<br>+<br>+#if
 0 /* unused as of yet */<br>+static int drm_dp_send_dpcd_read(struct 
drm_dp_mst_topology_mgr *mgr,<br>+                                 struct drm_dp_mst_port *port,<br>+
                                 int offset, int size)<br>+{<br>+   int len;<br>+     struct 
drm_dp_sideband_msg_tx *txmsg;<br>+<br>+    txmsg = kzalloc(sizeof(*txmsg),
 GFP_KERNEL);<br>+        if (!txmsg)<br>+          return -ENOMEM;<br>+<br>+   len = 
build_dpcd_read(txmsg, port->port_num, offset, size);<br>+     txmsg->dst = 
port->parent;<br>+<br>+  drm_dp_queue_down_tx(mgr, txmsg);<br>+<br>+ 
return 0;<br>+}<br>+#endif<br>+<br>+static int 
drm_dp_send_dpcd_write(struct drm_dp_mst_topology_mgr *mgr,<br>+                            
struct drm_dp_mst_port *port,<br>+                                  int offset, int size, u8 *bytes)<br>+{<br>+
        int len;<br>+     int ret;<br>+     struct drm_dp_sideband_msg_tx *txmsg;<br>+        
struct drm_dp_mst_branch *mstb;<br>+<br>+   mstb = 
drm_dp_get_validated_mstb_ref(mgr, port->parent);<br>+ if (!mstb)<br>+
                return -EINVAL;<br>+<br>+   txmsg = kzalloc(sizeof(*txmsg), GFP_KERNEL);<br>+
        if (!txmsg) {<br>+                ret = -ENOMEM;<br>+               goto fail_put;<br>+       }<br>+<br>+
        len = build_dpcd_write(txmsg, port->port_num, offset, size, bytes);<br>+
        txmsg->dst = mstb;<br>+<br>+     drm_dp_queue_down_tx(mgr, txmsg);<br>+<br>+
        ret = drm_dp_mst_wait_tx_reply(mstb, txmsg);<br>+ if (ret > 0) {<br>+
                if (txmsg->reply.reply_type == 1) {<br>+                       ret = -EINVAL;<br>+               } 
else<br>+                 ret = 0;<br>+     }<br>+    kfree(txmsg);<br>+fail_put:<br>+    
drm_dp_put_mst_branch_device(mstb);<br>+  return ret;<br>+}<br>+<br>+static
 int drm_dp_encode_up_ack_reply(struct drm_dp_sideband_msg_tx *msg, u8 
req_type)<br>+{<br>+        struct drm_dp_sideband_msg_reply_body reply;<br>+<br>+
        reply.reply_type = 1;<br>+        reply.req_type = req_type;<br>+   
drm_dp_encode_sideband_reply(&reply, msg);<br>+       return 0;<br>+}<br>+<br>+static
 int drm_dp_send_up_ack_reply(struct drm_dp_mst_topology_mgr *mgr,<br>+   
                            struct drm_dp_mst_branch *mstb,<br>+                              int req_type, int 
seqno, bool broadcast)<br>+{<br>+   struct drm_dp_sideband_msg_tx *txmsg;<br>+<br>+
        txmsg = kzalloc(sizeof(*txmsg), GFP_KERNEL);<br>+ if (!txmsg)<br>+          
return -ENOMEM;<br>+<br>+   txmsg->dst = mstb;<br>+        txmsg->seqno = 
seqno;<br>+       drm_dp_encode_up_ack_reply(txmsg, req_type);<br>+<br>+      
mutex_lock(&mgr->qlock);<br>+      list_add_tail(&txmsg->next, 
&mgr->tx_msg_upq);<br>+    if (!mgr->tx_up_in_progress) {<br>+            
process_single_up_tx_qlock(mgr);<br>+     }<br>+    
mutex_unlock(&mgr->qlock);<br>+    return 0;<br>+}<br>+<br>+static 
int drm_dp_get_vc_payload_bw(int dp_link_bw, int dp_link_count)<br>+{<br>+
        switch (dp_link_bw) {<br>+        case DP_LINK_BW_1_62:<br>+                return 3 * 
dp_link_count;<br>+       case DP_LINK_BW_2_7:<br>+         return 5 * dp_link_count;<br>+
        case DP_LINK_BW_5_4:<br>+         return 10 * dp_link_count;<br>+   }<br>+    
return 0;<br>+}<br>+<br>+/**<br>+ * drm_dp_mst_topology_mgr_set_mst() - 
Set the MST state for a topology manager<br>+ * @mgr: manager to set 
state for<br>+ * @mst_state: true to enable MST on this connector - 
false to disable.<br>+ *<br>+ * This is called by the driver when it 
detects an MST capable device plugged<br>+ * into a DP MST capable port,
 or when a DP MST capable device is unplugged.<br>+ */<br>+int 
drm_dp_mst_topology_mgr_set_mst(struct drm_dp_mst_topology_mgr *mgr, 
bool mst_state)<br>+{<br>+  int ret = 0;<br>+ struct drm_dp_mst_branch 
*mstb = NULL;<br>+<br>+     mutex_lock(&mgr->lock);<br>+       if 
(mst_state == mgr->mst_state)<br>+             goto out_unlock;<br>+<br>+  
mgr->mst_state = mst_state;<br>+       /* set the device into MST mode */<br>+
        if (mst_state) {<br>+             WARN_ON(mgr->mst_primary);<br>+<br>+             /* get 
dpcd info */<br>+         mutex_lock(&mgr->aux_lock);<br>+           ret = 
drm_dp_dpcd_read(mgr->aux, DP_DPCD_REV, mgr->dpcd, 
DP_RECEIVER_CAP_SIZE);<br>+               mutex_unlock(&mgr->aux_lock);<br>+         
if (ret != DP_RECEIVER_CAP_SIZE) {<br>+                   DRM_DEBUG_KMS("failed to read 
DPCD\n");<br>+                       goto out_unlock;<br>+             }<br>+<br>+         mgr->pbn_div = 
drm_dp_get_vc_payload_bw(mgr->dpcd[1], mgr->dpcd[2] & 
DP_MAX_LANE_COUNT_MASK);<br>+             mgr->total_pbn = 2560;<br>+            
mgr->total_slots = DIV_ROUND_UP(mgr->total_pbn, mgr->pbn_div);<br>+
                mgr->avail_slots = mgr->total_slots;<br>+<br>+                /* add initial 
branch device at LCT 1 */<br>+            mstb = drm_dp_add_mst_branch_device(1, 
NULL);<br>+               if (mstb == NULL) {<br>+                  ret = -ENOMEM;<br>+                       goto 
out_unlock;<br>+          }<br>+            mstb->mgr = mgr;<br>+<br>+               /* give this 
the main reference */<br>+                mgr->mst_primary = mstb;<br>+          
kref_get(&mgr->mst_primary->kref);<br>+<br>+              {<br>+                    struct 
drm_dp_payload reset_pay;<br>+                    reset_pay.start_slot = 0;<br>+                    
reset_pay.num_slots = 0x3f;<br>+                  drm_dp_dpcd_write_payload(mgr, 0, 
&reset_pay);<br>+             }<br>+<br>+         mutex_lock(&mgr->aux_lock);<br>+
                ret = drm_dp_dpcd_writeb(mgr->aux, DP_MSTM_CTRL,<br>+                                   
DP_MST_EN | DP_UP_REQ_EN | DP_UPSTREAM_IS_SRC);<br>+              
mutex_unlock(&mgr->aux_lock);<br>+         if (ret < 0) {<br>+                    goto
 out_unlock;<br>+         }<br>+<br>+<br>+              /* sort out guid */<br>+          
mutex_lock(&mgr->aux_lock);<br>+           ret = 
drm_dp_dpcd_read(mgr->aux, DP_GUID, mgr->guid, 16);<br>+            
mutex_unlock(&mgr->aux_lock);<br>+         if (ret != 16) {<br>+                     
DRM_DEBUG_KMS("failed to read DP GUID %d\n", ret);<br>+                 goto 
out_unlock;<br>+          }<br>+<br>+         mgr->guid_valid = 
drm_dp_validate_guid(mgr, mgr->guid);<br>+             if (!mgr->guid_valid) {<br>+
                        ret = drm_dp_dpcd_write(mgr->aux, DP_GUID, mgr->guid, 16);<br>+
                        mgr->guid_valid = true;<br>+           }<br>+<br>+         
queue_work(system_long_wq, &mgr->work);<br>+<br>+            ret = 0;<br>+     }
 else {<br>+              /* disable MST on the device */<br>+              mstb = 
mgr->mst_primary;<br>+         mgr->mst_primary = NULL;<br>+          /* this can
 fail if the device is gone */<br>+               drm_dp_dpcd_writeb(mgr->aux, 
DP_MSTM_CTRL, 0);<br>+            ret = 0;<br>+             memset(mgr->payloads, 0, 
mgr->max_payloads * sizeof(struct drm_dp_payload));<br>+               
mgr->payload_mask = 0;<br>+            set_bit(0, &mgr->payload_mask);<br>+
        }<br>+<br>+out_unlock:<br>+   mutex_unlock(&mgr->lock);<br>+     if 
(mstb)<br>+               drm_dp_put_mst_branch_device(mstb);<br>+  return ret;<br>+<br>+}<br>+EXPORT_SYMBOL(drm_dp_mst_topology_mgr_set_mst);<br>+<br>+/**<br>+
 * drm_dp_mst_topology_mgr_suspend() - suspend the MST manager<br>+ * 
@mgr: manager to suspend<br>+ *<br>+ * This function tells the MST 
device that we can't handle UP messages<br>+ * anymore. This should stop
 it from sending any since we are suspended.<br>+ */<br>+void 
drm_dp_mst_topology_mgr_suspend(struct drm_dp_mst_topology_mgr *mgr)<br>+{<br>+
        mutex_lock(&mgr->lock);<br>+       mutex_lock(&mgr->aux_lock);<br>+
        drm_dp_dpcd_writeb(mgr->aux, DP_MSTM_CTRL,<br>+                           DP_MST_EN | 
DP_UPSTREAM_IS_SRC);<br>+ mutex_unlock(&mgr->aux_lock);<br>+ 
mutex_unlock(&mgr->lock);<br>+}<br>+EXPORT_SYMBOL(drm_dp_mst_topology_mgr_suspend);<br>+<br>+/**<br>+
 * drm_dp_mst_topology_mgr_resume() - resume the MST manager<br>+ * 
@mgr: manager to resume<br>+ *<br>+ * This will fetch DPCD and see if 
the device is still there;<br>+ * if it is, it will rewrite the MSTM control bits, and return.<br>+ *<br>+ * If the device fails this returns -1, and the driver should do<br>+ * a full MST reprobe, in case we were
 undocked.<br>+ */<br>+int drm_dp_mst_topology_mgr_resume(struct 
drm_dp_mst_topology_mgr *mgr)<br>+{<br>+    int ret = 0;<br>+<br>+      
mutex_lock(&mgr->lock);<br>+<br>+    if (mgr->mst_primary) {<br>+
                int sret;<br>+            mutex_lock(&mgr->aux_lock);<br>+           sret = 
drm_dp_dpcd_read(mgr->aux, DP_DPCD_REV, mgr->dpcd, 
DP_RECEIVER_CAP_SIZE);<br>+               mutex_unlock(&mgr->aux_lock);<br>+         
if (sret != DP_RECEIVER_CAP_SIZE) {<br>+                  DRM_DEBUG_KMS("dpcd read 
failed - undocked during suspend?\n");<br>+                  ret = -1;<br>+                    goto 
out_unlock;<br>+          }<br>+<br>+         mutex_lock(&mgr->aux_lock);<br>+           
ret = drm_dp_dpcd_writeb(mgr->aux, DP_MSTM_CTRL,<br>+                                   DP_MST_EN |
 DP_UP_REQ_EN | DP_UPSTREAM_IS_SRC);<br>+         
mutex_unlock(&mgr->aux_lock);<br>+         if (ret < 0) {<br>+                    
DRM_DEBUG_KMS("mst write failed - undocked during suspend?\n");<br>+                    
ret = -1;<br>+                    goto out_unlock;<br>+             }<br>+            ret = 0;<br>+     } else<br>+
                ret = -1;<br>+<br>+out_unlock:<br>+   mutex_unlock(&mgr->lock);<br>+
        return ret;<br>+}<br>+EXPORT_SYMBOL(drm_dp_mst_topology_mgr_resume);<br>+<br>+static
 void drm_dp_get_one_sb_msg(struct drm_dp_mst_topology_mgr *mgr, bool 
up)<br>+{<br>+      int len;<br>+     u8 replyblock[32];<br>+   int replylen, 
origlen, curreply;<br>+   int ret;<br>+     struct drm_dp_sideband_msg_rx 
*msg;<br>+        int basereg = up ? DP_SIDEBAND_MSG_UP_REQ_BASE : 
DP_SIDEBAND_MSG_DOWN_REP_BASE;<br>+       msg = up ? &mgr->up_req_recv :
 &mgr->down_rep_recv;<br>+<br>+      len = 
min(mgr->max_dpcd_transaction_bytes, 16);<br>+ 
mutex_lock(&mgr->aux_lock);<br>+   ret = 
drm_dp_dpcd_read(mgr->aux, basereg,<br>+                              replyblock, len);<br>+
        mutex_unlock(&mgr->aux_lock);<br>+ if (ret != len) {<br>+            
DRM_DEBUG_KMS("failed to read DPCD sideband msg %d %d\n", len, ret);<br>+           
return;<br>+      }<br>+    ret = drm_dp_sideband_msg_build(msg, replyblock, 
len, true);<br>+  if (!ret) {<br>+          DRM_DEBUG_KMS("sideband msg build 
failed %d\n", replyblock[0]);<br>+           return;<br>+      }<br>+    replylen = 
msg->curchunk_len + msg->curchunk_hdrlen;<br>+<br>+   origlen = 
replylen;<br>+    replylen -= len;<br>+     curreply = len;<br>+      while 
(replylen > 0) {<br>+          len = min3(replylen, 
mgr->max_dpcd_transaction_bytes, 16);<br>+             
mutex_lock(&mgr->aux_lock);<br>+           ret = 
drm_dp_dpcd_read(mgr->aux, basereg + curreply,<br>+                                
replyblock, len);<br>+            mutex_unlock(&mgr->aux_lock);<br>+         if 
(ret != len) {<br>+                       DRM_DEBUG_KMS("failed to read a chunk\n");<br>+         }<br>+
                ret = drm_dp_sideband_msg_build(msg, replyblock, len, false);<br>+                if
 (ret == false)<br>+                      DRM_DEBUG_KMS("failed to build sideband msg\n");<br>+
                curreply += len;<br>+             replylen -= len;<br>+     }<br>+}<br>+<br>+static 
int drm_dp_mst_handle_down_rep(struct drm_dp_mst_topology_mgr *mgr)<br>+{<br>+
        int ret = 0;<br>+<br>+      drm_dp_get_one_sb_msg(mgr, false);<br>+<br>+        if 
(mgr->down_rep_recv.have_eomt) {<br>+          struct drm_dp_sideband_msg_tx 
*txmsg;<br>+              struct drm_dp_mst_branch *mstb;<br>+              int slot = -1;<br>+       
        mstb = drm_dp_get_mst_branch_device(mgr,<br>+                                                 
mgr->down_rep_recv.initial_hdr.lct,<br>+                                                   
mgr->down_rep_recv.initial_hdr.rad);<br>+<br>+           if (!mstb) {<br>+                 
DRM_DEBUG_KMS("Got MST reply from unknown device %d\n", 
mgr->down_rep_recv.initial_hdr.lct);<br>+                      
memset(&mgr->down_rep_recv, 0, sizeof(struct 
drm_dp_sideband_msg_rx));<br>+                    return 0;<br>+            }<br>+<br>+         /* find 
the message */<br>+               slot = mgr->down_rep_recv.initial_hdr.seqno;<br>+
                mutex_lock(&mgr->qlock);<br>+              txmsg = mstb->tx_slots[slot];<br>+
                /* remove from slots */<br>+              mutex_unlock(&mgr->qlock);<br>+<br>+
                if (!txmsg) {<br>+                        DRM_DEBUG_KMS("Got MST reply with no msg %p %d %d
 %02x %02x\n",<br>+                         mstb,<br>+                        
mgr->down_rep_recv.initial_hdr.seqno,<br>+                            
mgr->down_rep_recv.initial_hdr.lct,<br>+                                     
mgr->down_rep_recv.initial_hdr.rad[0],<br>+                                  
mgr->down_rep_recv.msg[0]);<br>+                       
drm_dp_put_mst_branch_device(mstb);<br>+                  
memset(&mgr->down_rep_recv, 0, sizeof(struct 
drm_dp_sideband_msg_rx));<br>+                    return 0;<br>+            }<br>+<br>+         
drm_dp_sideband_parse_reply(&mgr->down_rep_recv, 
&txmsg->reply);<br>+               if (txmsg->reply.reply_type == 1) {<br>+
                        DRM_DEBUG_KMS("Got NAK reply: req 0x%02x, reason 0x%02x, nak data 
0x%02x\n", txmsg->reply.req_type, txmsg->reply.u.nak.reason, 
txmsg->reply.u.nak.nak_data);<br>+             }<br>+<br>+         
memset(&mgr->down_rep_recv, 0, sizeof(struct 
drm_dp_sideband_msg_rx));<br>+            drm_dp_put_mst_branch_device(mstb);<br>+<br>+
                mutex_lock(&mgr->qlock);<br>+              txmsg->state = 
DRM_DP_SIDEBAND_TX_RX;<br>+               mstb->tx_slots[slot] = NULL;<br>+              
mutex_unlock(&mgr->qlock);<br>+<br>+         
wake_up(&mgr->tx_waitq);<br>+      }<br>+    return ret;<br>+}<br>+<br>+static
 int drm_dp_mst_handle_up_req(struct drm_dp_mst_topology_mgr *mgr)<br>+{<br>+
        int ret = 0;<br>+ drm_dp_get_one_sb_msg(mgr, true);<br>+<br>+ if 
(mgr->up_req_recv.have_eomt) {<br>+            struct 
drm_dp_sideband_msg_req_body msg;<br>+            struct drm_dp_mst_branch *mstb;<br>+
                bool seqno;<br>+          mstb = drm_dp_get_mst_branch_device(mgr,<br>+                                              
   mgr->up_req_recv.initial_hdr.lct,<br>+                                                  
mgr->up_req_recv.initial_hdr.rad);<br>+                if (!mstb) {<br>+                 
DRM_DEBUG_KMS("Got MST up req from unknown device %d\n", 
mgr->up_req_recv.initial_hdr.lct);<br>+                        
memset(&mgr->up_req_recv, 0, sizeof(struct 
drm_dp_sideband_msg_rx));<br>+                    return 0;<br>+            }<br>+<br>+         seqno = 
mgr->up_req_recv.initial_hdr.seqno;<br>+               
drm_dp_sideband_parse_req(&mgr->up_req_recv, &msg);<br>+<br>+
                if (msg.req_type == DP_CONNECTION_STATUS_NOTIFY) {<br>+                   
drm_dp_send_up_ack_reply(mgr, mstb, msg.req_type, seqno, false);<br>+                     
drm_dp_update_port(mstb, &msg.u.conn_stat);<br>+                      
DRM_DEBUG_KMS("Got CSN: pn: %d ldps:%d ddps: %d mcs: %d ip: %d pdt: 
%d\n", msg.u.conn_stat.port_number, 
msg.u.conn_stat.legacy_device_plug_status, 
msg.u.conn_stat.displayport_device_plug_status, 
msg.u.conn_stat.message_capability_status, msg.u.conn_stat.input_port, 
msg.u.conn_stat.peer_device_type);<br>+                   
(*mgr->cbs->hotplug)(mgr);<br>+<br>+          } else if (msg.req_type == 
DP_RESOURCE_STATUS_NOTIFY) {<br>+                 drm_dp_send_up_ack_reply(mgr, mstb, 
msg.req_type, seqno, false);<br>+                 DRM_DEBUG_KMS("Got RSN: pn: %d 
avail_pbn %d\n", msg.u.resource_stat.port_number, 
msg.u.resource_stat.available_pbn);<br>+          }<br>+<br>+         
drm_dp_put_mst_branch_device(mstb);<br>+          
memset(&mgr->up_req_recv, 0, sizeof(struct 
drm_dp_sideband_msg_rx));<br>+    }<br>+    return ret;<br>+}<br>+<br>+/**<br>+
 * drm_dp_mst_hpd_irq() - MST hotplug IRQ notify<br>+ * @mgr: manager to
 notify irq for.<br>+ * @esi: 4 bytes from SINK_COUNT_ESI<br>+ *<br>+ * 
This should be called from the driver when it detects a short IRQ,<br>+ *
 along with the value of the DEVICE_SERVICE_IRQ_VECTOR_ESI0. The<br>+ * 
topology manager will process the sideband messages received as a result<br>+
 * of this.<br>+ */<br>+int drm_dp_mst_hpd_irq(struct 
drm_dp_mst_topology_mgr *mgr, u8 *esi, bool *handled)<br>+{<br>+    int ret
 = 0;<br>+        int sc;<br>+      *handled = false;<br>+    sc = esi[0] & 0x3f;<br>+
        if (sc != mgr->sink_count) {<br>+<br>+           if (mgr->mst_primary 
&& mgr->sink_count == 0 && sc) {<br>+                  
mgr->mst_primary->link_address_sent = false;<br>+                   
queue_work(system_long_wq, &mgr->work);<br>+               }<br>+            
mgr->sink_count = sc;<br>+             *handled = true;<br>+<br>+  }<br>+<br>+ if
 (esi[1] & DP_DOWN_REP_MSG_RDY) {<br>+                ret = 
drm_dp_mst_handle_down_rep(mgr);<br>+             *handled = true;<br>+     }<br>+<br>+
        if (esi[1] & DP_UP_REQ_MSG_RDY) {<br>+                ret |= 
drm_dp_mst_handle_up_req(mgr);<br>+               *handled = true;<br>+     }<br>+<br>+ 
drm_dp_mst_kick_tx(mgr);<br>+     return ret;<br>+}<br>+EXPORT_SYMBOL(drm_dp_mst_hpd_irq);<br>+<br>+/**<br>+
 * drm_dp_mst_detect_port() - get connection status for an MST port<br>+
 * @mgr: manager for this port<br>+ * @port: unverified pointer to a 
port<br>+ *<br>+ * This returns the current connection state for a port.
 It validates the<br>+ * port pointer still exists so the caller doesn't
 require a reference<br>+ */<br>+enum drm_connector_status 
drm_dp_mst_detect_port(struct drm_dp_mst_topology_mgr *mgr, struct 
drm_dp_mst_port *port)<br>+{<br>+   enum drm_connector_status status = 
connector_status_disconnected;<br>+<br>+    /* we need to search for the 
port in the mgr in case it's gone */<br>+  port = 
drm_dp_get_validated_port_ref(mgr, port);<br>+    if (!port)<br>+           return 
connector_status_disconnected;<br>+<br>+    if (!port->ddps)<br>+          goto 
out;<br>+<br>+      switch (port->pdt) {<br>+      case DP_PEER_DEVICE_NONE:<br>+
        case DP_PEER_DEVICE_MST_BRANCHING:<br>+           break;<br>+<br>+    case 
DP_PEER_DEVICE_SST_SINK:<br>+             status = connector_status_connected;<br>+
                break;<br>+       case DP_PEER_DEVICE_DP_LEGACY_CONV:<br>+          if 
(port->ldps)<br>+                      status = connector_status_connected;<br>+         break;<br>+
        }<br>+out:<br>+     drm_dp_put_port(port);<br>+       return status;<br>+}<br>+EXPORT_SYMBOL(drm_dp_mst_detect_port);<br>+<br>+/**<br>+
 * drm_dp_mst_get_edid() - get EDID for an MST port<br>+ * @connector: 
toplevel connector to get EDID for<br>+ * @mgr: manager for this port<br>+
 * @port: unverified pointer to a port.<br>+ *<br>+ * This returns an 
EDID for the port connected to a connector.<br>+ * It validates the 
pointer still exists so the caller doesn't require a<br>+ * reference.<br>+
 */<br>+struct edid *drm_dp_mst_get_edid(struct drm_connector 
*connector, struct drm_dp_mst_topology_mgr *mgr, struct drm_dp_mst_port 
*port)<br>+{<br>+   struct edid *edid = NULL;<br>+<br>+ /* we need to 
search for the port in the mgr in case it's gone */<br>+   port = 
drm_dp_get_validated_port_ref(mgr, port);<br>+    if (!port)<br>+           return 
NULL;<br>+<br>+     edid = drm_get_edid(connector, &port->aux.ddc);<br>+
        drm_dp_put_port(port);<br>+       return edid;<br>+}<br>+EXPORT_SYMBOL(drm_dp_mst_get_edid);<br>+<br>+/**<br>+
 * drm_dp_find_vcpi_slots() - find slots for this PBN value<br>+ * @mgr:
 manager to use<br>+ * @pbn: payload bandwidth to convert into slots.<br>+
 */<br>+int drm_dp_find_vcpi_slots(struct drm_dp_mst_topology_mgr *mgr,<br>+
                           int pbn)<br>+{<br>+      int num_slots;<br>+<br>+    num_slots = 
DIV_ROUND_UP(pbn, mgr->pbn_div);<br>+<br>+       if (num_slots > 
mgr->avail_slots)<br>+         return -ENOSPC;<br>+      return num_slots;<br>+}<br>+EXPORT_SYMBOL(drm_dp_find_vcpi_slots);<br>+<br>+static
 int drm_dp_init_vcpi(struct drm_dp_mst_topology_mgr *mgr,<br>+                       
struct drm_dp_vcpi *vcpi, int pbn)<br>+{<br>+       int num_slots;<br>+       int 
ret;<br>+<br>+      num_slots = DIV_ROUND_UP(pbn, mgr->pbn_div);<br>+<br>+
        if (num_slots > mgr->avail_slots)<br>+              return -ENOSPC;<br>+<br>+
        vcpi->pbn = pbn;<br>+  vcpi->aligned_pbn = num_slots * 
mgr->pbn_div;<br>+     vcpi->num_slots = num_slots;<br>+<br>+   ret = 
drm_dp_mst_assign_payload_id(mgr, vcpi);<br>+     if (ret < 0)<br>+              
return ret;<br>+  return 0;<br>+}<br>+<br>+/**<br>+ * 
drm_dp_mst_allocate_vcpi() - Allocate a virtual channel<br>+ * @mgr: 
manager for this port<br>+ * @port: port to allocate a virtual channel 
for.<br>+ * @pbn: payload bandwidth number to request<br>+ * @slots: 
returned number of slots for this PBN.<br>+ */<br>+bool 
drm_dp_mst_allocate_vcpi(struct drm_dp_mst_topology_mgr *mgr, struct 
drm_dp_mst_port *port, int pbn, int *slots)<br>+{<br>+      int ret;<br>+<br>+
        port = drm_dp_get_validated_port_ref(mgr, port);<br>+     if (!port)<br>+           
return false;<br>+<br>+     if (port->vcpi.vcpi > 0) {<br>+             
DRM_DEBUG_KMS("payload: vcpi %d already allocated for pbn %d - requested
 pbn %d\n", port->vcpi.vcpi, port->vcpi.pbn, pbn);<br>+                if (pbn 
== port->vcpi.pbn) {<br>+                      *slots = port->vcpi.num_slots;<br>+                    
return true;<br>+         }<br>+    }<br>+<br>+ ret = drm_dp_init_vcpi(mgr, 
&port->vcpi, pbn);<br>+    if (ret) {<br>+           DRM_DEBUG_KMS("failed to
 init vcpi %d %d %d\n", DIV_ROUND_UP(pbn, mgr->pbn_div), 
mgr->avail_slots, ret);<br>+           goto out;<br>+    }<br>+    
DRM_DEBUG_KMS("initing vcpi for %d %d\n", pbn, port->vcpi.num_slots);<br>+
        *slots = port->vcpi.num_slots;<br>+<br>+ drm_dp_put_port(port);<br>+
        return true;<br>+out:<br>+  return false;<br>+}<br>+EXPORT_SYMBOL(drm_dp_mst_allocate_vcpi);<br>+<br>+/**<br>+
 * drm_dp_mst_reset_vcpi_slots() - Reset number of slots to 0 for VCPI<br>+
 * @mgr: manager for this port<br>+ * @port: unverified pointer to a 
port.<br>+ *<br>+ * This just resets the number of slots for the port's 
VCPI for later programming.<br>+ */<br>+void 
drm_dp_mst_reset_vcpi_slots(struct drm_dp_mst_topology_mgr *mgr, struct 
drm_dp_mst_port *port)<br>+{<br>+   port = 
drm_dp_get_validated_port_ref(mgr, port);<br>+    if (!port)<br>+           return;<br>+
        port->vcpi.num_slots = 0;<br>+ drm_dp_put_port(port);<br>+}<br>+EXPORT_SYMBOL(drm_dp_mst_reset_vcpi_slots);<br>+<br>+/**<br>+
 * drm_dp_mst_deallocate_vcpi() - deallocate a VCPI<br>+ * @mgr: manager
 for this port<br>+ * @port: unverified port to deallocate vcpi for<br>+
 */<br>+void drm_dp_mst_deallocate_vcpi(struct drm_dp_mst_topology_mgr 
*mgr, struct drm_dp_mst_port *port)<br>+{<br>+      port = 
drm_dp_get_validated_port_ref(mgr, port);<br>+    if (!port)<br>+           return;<br>+<br>+
        drm_dp_mst_put_payload_id(mgr, port->vcpi.vcpi);<br>+  
port->vcpi.num_slots = 0;<br>+ port->vcpi.pbn = 0;<br>+       
port->vcpi.aligned_pbn = 0;<br>+       port->vcpi.vcpi = 0;<br>+      
drm_dp_put_port(port);<br>+}<br>+EXPORT_SYMBOL(drm_dp_mst_deallocate_vcpi);<br>+<br>+static
 int drm_dp_dpcd_write_payload(struct drm_dp_mst_topology_mgr *mgr,<br>+
                                     int id, struct drm_dp_payload *payload)<br>+{<br>+     u8 
payload_alloc[3], status;<br>+    int ret;<br>+     int retries = 0;<br>+<br>+  
mutex_lock(&mgr->aux_lock);<br>+   drm_dp_dpcd_writeb(mgr->aux, 
DP_PAYLOAD_TABLE_UPDATE_STATUS,<br>+                         DP_PAYLOAD_TABLE_UPDATED);<br>+
        mutex_unlock(&mgr->aux_lock);<br>+<br>+      payload_alloc[0] = id;<br>+
        payload_alloc[1] = payload->start_slot;<br>+   payload_alloc[2] = 
payload->num_slots;<br>+<br>+    mutex_lock(&mgr->aux_lock);<br>+
        ret = drm_dp_dpcd_write(mgr->aux, DP_PAYLOAD_ALLOCATE_SET, 
payload_alloc, 3);<br>+   mutex_unlock(&mgr->aux_lock);<br>+ if 
(ret != 3) {<br>+         DRM_DEBUG_KMS("failed to write payload allocation 
%d\n", ret);<br>+            goto fail;<br>+   }<br>+<br>+retry:<br>+        
mutex_lock(&mgr->aux_lock);<br>+   ret = 
drm_dp_dpcd_readb(mgr->aux, DP_PAYLOAD_TABLE_UPDATE_STATUS, 
&status);<br>+        mutex_unlock(&mgr->aux_lock);<br>+ if (ret 
< 0) {<br>+            DRM_DEBUG_KMS("failed to read payload table status 
%d\n", ret);<br>+            goto fail;<br>+   }<br>+<br>+ if (!(status & 
DP_PAYLOAD_TABLE_UPDATED)) {<br>+         retries++;<br>+           if (retries < 20)
 {<br>+                   usleep_range(10000, 20000);<br>+                  goto retry;<br>+          }<br>+            
DRM_DEBUG_KMS("status not set after read payload table status %d\n", 
status);<br>+             ret = -EINVAL;<br>+               goto fail;<br>+   }<br>+    ret = 0;<br>+fail:<br>+
        return ret;<br>+}<br>+<br>+<br>+/**<br>+ * drm_dp_check_act_status() - 
Check ACT handled status.<br>+ * @mgr: manager to use<br>+ *<br>+ * 
Check the payload status bits in the DPCD for ACT handled completion.<br>+
 */<br>+int drm_dp_check_act_status(struct drm_dp_mst_topology_mgr *mgr)<br>+{<br>+
        u8 status;<br>+   int ret;<br>+     int count = 0;<br>+<br>+    do {<br>+         
mutex_lock(&mgr->aux_lock);<br>+           ret = 
drm_dp_dpcd_readb(mgr->aux, DP_PAYLOAD_TABLE_UPDATE_STATUS, 
&status);<br>+                mutex_unlock(&mgr->aux_lock);<br>+<br>+              if 
(ret < 0) {<br>+                       DRM_DEBUG_KMS("failed to read payload table status
 %d\n", ret);<br>+                   goto fail;<br>+           }<br>+<br>+         if (status & 
DP_PAYLOAD_ACT_HANDLED)<br>+                      break;<br>+               count++;<br>+             udelay(100);<br>+<br>+
        } while (count < 30);<br>+<br>+  if (!(status & 
DP_PAYLOAD_ACT_HANDLED)) {<br>+           DRM_DEBUG_KMS("failed to get ACT bit %d
 after %d retries\n", status, count);<br>+           ret = -EINVAL;<br>+               goto 
fail;<br>+        }<br>+    return 0;<br>+fail:<br>+    return ret;<br>+}<br>+EXPORT_SYMBOL(drm_dp_check_act_status);<br>+<br>+/**<br>+
 * drm_dp_calc_pbn_mode() - Calculate the PBN for a mode.<br>+ * @clock:
 dot clock for the mode<br>+ * @bpp: bpp for the mode.<br>+ *<br>+ * 
This uses the formula in the spec to calculate the PBN value for a mode.<br>+
 */<br>+int drm_dp_calc_pbn_mode(int clock, int bpp)<br>+{<br>+       
fixed20_12 pix_bw;<br>+   fixed20_12 fbpp;<br>+     fixed20_12 result;<br>+   
fixed20_12 margin, tmp;<br>+      u32 res;<br>+<br>+  pix_bw.full = 
dfixed_const(clock);<br>+ fbpp.full = dfixed_const(bpp);<br>+       tmp.full =
 dfixed_const(8);<br>+    fbpp.full = dfixed_div(fbpp, tmp);<br>+<br>+        
result.full = dfixed_mul(pix_bw, fbpp);<br>+      margin.full = 
dfixed_const(54);<br>+    tmp.full = dfixed_const(64);<br>+ margin.full = 
dfixed_div(margin, tmp);<br>+     result.full = dfixed_div(result, margin);<br>+<br>+
        margin.full = dfixed_const(1006);<br>+    tmp.full = dfixed_const(1000);<br>+
        margin.full = dfixed_div(margin, tmp);<br>+       result.full = 
dfixed_mul(result, margin);<br>+<br>+       result.full = dfixed_div(result, 
tmp);<br>+        result.full = dfixed_ceil(result);<br>+   res = 
dfixed_trunc(result);<br>+        return res;<br>+}<br>+EXPORT_SYMBOL(drm_dp_calc_pbn_mode);<br>+<br>+static
 int test_calc_pbn_mode(void)<br>+{<br>+    int ret;<br>+     ret = 
drm_dp_calc_pbn_mode(154000, 30);<br>+    if (ret != 689)<br>+              return 
-EINVAL;<br>+     ret = drm_dp_calc_pbn_mode(234000, 30);<br>+      if (ret != 
1047)<br>+                return -EINVAL;<br>+      return 0;<br>+}<br>+<br>+/* we want to 
kick the TX after we've ACKed the up/down IRQs. */<br>+static void 
drm_dp_mst_kick_tx(struct drm_dp_mst_topology_mgr *mgr)<br>+{<br>+  
queue_work(system_long_wq, &mgr->tx_work);<br>+}<br>+<br>+static 
void drm_dp_mst_dump_mstb(struct seq_file *m,<br>+                                 struct 
drm_dp_mst_branch *mstb)<br>+{<br>+ struct drm_dp_mst_port *port;<br>+        
int tabs = mstb->lct;<br>+     char prefix[10];<br>+     int i;<br>+<br>+    for
 (i = 0; i < tabs; i++)<br>+           prefix[i] = '\t';<br>+    prefix[i] = 
'\0';<br>+<br>+     seq_printf(m, "%smst: %p, %d\n", prefix, mstb, 
mstb->num_ports);<br>+ list_for_each_entry(port, &mstb->ports,
 next) {<br>+             seq_printf(m, "%sport: %d: ddps: %d ldps: %d, %p, conn: 
%p\n", prefix, port->port_num, port->ddps, port->ldps, port, 
port->connector);<br>+         if (port->mstb)<br>+                   
drm_dp_mst_dump_mstb(m, port->mstb);<br>+      }<br>+}<br>+<br>+static 
bool dump_dp_payload_table(struct drm_dp_mst_topology_mgr *mgr,<br>+                              
  char *buf)<br>+{<br>+     int ret;<br>+     int i;<br>+       
mutex_lock(&mgr->aux_lock);<br>+   for (i = 0; i < 4; i++) {<br>+
                ret = drm_dp_dpcd_read(mgr->aux, DP_PAYLOAD_TABLE_UPDATE_STATUS + 
(i * 16), &buf[i * 16], 16);<br>+             if (ret != 16)<br>+                       break;<br>+
        }<br>+    mutex_unlock(&mgr->aux_lock);<br>+ if (i == 4)<br>+          
return true;<br>+ return false;<br>+}<br>+<br>+/**<br>+ * 
drm_dp_mst_dump_topology() - dump topology to seq file.<br>+ * @m: 
seq_file to dump output to<br>+ * @mgr: manager to dump current topology
 for.<br>+ *<br>+ * helper to dump MST topology to a seq file for 
debugfs.<br>+ */<br>+void drm_dp_mst_dump_topology(struct seq_file *m,<br>+
                              struct drm_dp_mst_topology_mgr *mgr)<br>+{<br>+       int i;<br>+       
struct drm_dp_mst_port *port;<br>+        mutex_lock(&mgr->lock);<br>+       
if (mgr->mst_primary)<br>+             drm_dp_mst_dump_mstb(m, 
mgr->mst_primary);<br>+<br>+     /* dump VCPIs */<br>+     
mutex_unlock(&mgr->lock);<br>+<br>+  
mutex_lock(&mgr->payload_lock);<br>+       seq_printf(m, "vcpi: %lx\n",
 mgr->payload_mask);<br>+<br>+   for (i = 0; i < 
mgr->max_payloads; i++) {<br>+         if (mgr->proposed_vcpis[i]) {<br>+
                        port = container_of(mgr->proposed_vcpis[i], struct 
drm_dp_mst_port, vcpi);<br>+                      seq_printf(m, "vcpi %d: %d %d %d\n", i, 
port->port_num, port->vcpi.vcpi, port->vcpi.num_slots);<br>+             }
 else<br>+                        seq_printf(m, "vcpi %d: unused\n", i);<br>       }<br>    for (i = 
0; i < mgr->max_payloads; i++) {<br>+               seq_printf(m, "payload %d: 
%d, %d, %d\n",<br>+                     i,<br>+                           mgr->payloads[i].payload_state,<br>+
                           mgr->payloads[i].start_slot,<br>+                      
mgr->payloads[i].num_slots);<br>+<br>+<br>+        }<br>+    
mutex_unlock(&mgr->payload_lock);<br>+<br>+  
mutex_lock(&mgr->lock);<br>+       if (mgr->mst_primary) {<br>+           u8 
buf[64];<br>+             bool bret;<br>+           int ret;<br>+             ret = 
drm_dp_dpcd_read(mgr->aux, DP_DPCD_REV, buf, DP_RECEIVER_CAP_SIZE);<br>+
                seq_printf(m, "dpcd: ");<br>+           for (i = 0; i < 
DP_RECEIVER_CAP_SIZE; i++)<br>+                   seq_printf(m, "%02x ", buf[i]);<br>+            
seq_printf(m, "\n");<br>+               ret = drm_dp_dpcd_read(mgr->aux, 
DP_FAUX_CAP, buf, 2);<br>+                seq_printf(m, "faux/mst: ");<br>+               for (i = 
0; i < 2; i++)<br>+                    seq_printf(m, "%02x ", buf[i]);<br>+            
seq_printf(m, "\n");<br>+               ret = drm_dp_dpcd_read(mgr->aux, 
DP_MSTM_CTRL, buf, 1);<br>+               seq_printf(m, "mst ctrl: ");<br>+               for (i =
 0; i < 1; i++)<br>+                   seq_printf(m, "%02x ", buf[i]);<br>+            
seq_printf(m, "\n");<br>+<br>+            bret = dump_dp_payload_table(mgr, buf);<br>+
                if (bret) {<br>                  seq_printf(m, "payload table: ");<br>                  
for (i = 0; i < 63; i++)<br>+                          seq_printf(m, "%02x ", buf[i]);<br>+
                        seq_printf(m, "\n");<br>+               }<br>+<br>+ }<br>+<br>+ 
mutex_unlock(&mgr->lock);<br>+<br>+}<br>+EXPORT_SYMBOL(drm_dp_mst_dump_topology);<br>+<br>+static
 void drm_dp_tx_work(struct work_struct *work)<br>+{<br>+   struct 
drm_dp_mst_topology_mgr *mgr = container_of(work, struct 
drm_dp_mst_topology_mgr, tx_work);<br>+<br>+        
mutex_lock(&mgr->qlock);<br>+      if (mgr->tx_down_in_progress)<br>+
                process_single_down_tx_qlock(mgr);<br>+   
mutex_unlock(&mgr->qlock);<br>+}<br>+<br>+/**<br>+ * 
drm_dp_mst_topology_mgr_init() - initialise a topology manager<br>+ * 
@mgr: manager struct to initialise<br>+ * @dev: device providing this 
structure - for i2c addition.<br>+ * @aux: DP helper aux channel to talk
 to this device<br>+ * @max_dpcd_transaction_bytes: hw specific DPCD 
transaction limit<br>+ * @max_payloads: maximum number of payloads this 
GPU can source<br>+ * @conn_base_id: the connector object ID the MST 
device is connected to.<br>+ *<br>+ * Return 0 for success, or negative 
error code on failure<br>+ */<br>+int 
drm_dp_mst_topology_mgr_init(struct drm_dp_mst_topology_mgr *mgr,<br>+            
                 struct device *dev, struct drm_dp_aux *aux,<br>+                          int 
max_dpcd_transaction_bytes,<br>+                           int max_payloads, int conn_base_id)<br>+{<br>+
        mutex_init(&mgr->lock);<br>+       mutex_init(&mgr->qlock);<br>+
        mutex_init(&mgr->aux_lock);<br>+   
mutex_init(&mgr->payload_lock);<br>+       
INIT_LIST_HEAD(&mgr->tx_msg_upq);<br>+     
INIT_LIST_HEAD(&mgr->tx_msg_downq);<br>+   
INIT_WORK(&mgr->work, drm_dp_mst_link_probe_work);<br>+    
INIT_WORK(&mgr->tx_work, drm_dp_tx_work);<br>+     
init_waitqueue_head(&mgr->tx_waitq);<br>+  mgr->dev = dev;<br>+
        mgr->aux = aux;<br>+   mgr->max_dpcd_transaction_bytes = 
max_dpcd_transaction_bytes;<br>+  mgr->max_payloads = max_payloads;<br>+
        mgr->conn_base_id = conn_base_id;<br>+ mgr->payloads = 
kcalloc(max_payloads, sizeof(struct drm_dp_payload), GFP_KERNEL);<br>+    
if (!mgr->payloads)<br>+               return -ENOMEM;<br>+      mgr->proposed_vcpis
 = kcalloc(max_payloads, sizeof(struct drm_dp_vcpi *), GFP_KERNEL);<br>+
        if (!mgr->proposed_vcpis) {<br>+                kfree(mgr->payloads);<br>+                mgr->payloads = NULL;<br>+                return -ENOMEM;<br>+        }<br>+      set_bit(0, 
&mgr->payload_mask);<br>+  test_calc_pbn_mode();<br>+        return 0;<br>+}<br>+EXPORT_SYMBOL(drm_dp_mst_topology_mgr_init);<br>+<br>+/**<br>+
 * drm_dp_mst_topology_mgr_destroy() - destroy topology manager.<br>+ * 
@mgr: manager to destroy<br>+ */<br>+void 
drm_dp_mst_topology_mgr_destroy(struct drm_dp_mst_topology_mgr *mgr)<br>+{<br>+
        mutex_lock(&mgr->payload_lock);<br>+       kfree(mgr->payloads);<br>+
        mgr->payloads = NULL;<br>+     kfree(mgr->proposed_vcpis);<br>+       
mgr->proposed_vcpis = NULL;<br>+       
mutex_unlock(&mgr->payload_lock);<br>+     mgr->dev = NULL;<br>+  
mgr->aux = NULL;<br>+}<br>+EXPORT_SYMBOL(drm_dp_mst_topology_mgr_destroy);<br>+<br>+/*
 I2C device */<br>+static int drm_dp_mst_i2c_xfer(struct i2c_adapter 
*adapter, struct i2c_msg *msgs,<br>+                             int num)<br>+{<br>+  struct
 drm_dp_aux *aux = adapter->algo_data;<br>+    struct drm_dp_mst_port 
*port = container_of(aux, struct drm_dp_mst_port, aux);<br>+      struct 
drm_dp_mst_branch *mstb;<br>+     struct drm_dp_mst_topology_mgr *mgr = 
port->mgr;<br>+        unsigned int i;<br>+      bool reading = false;<br>+        
struct drm_dp_sideband_msg_req_body msg;<br>+     struct 
drm_dp_sideband_msg_tx *txmsg = NULL;<br>+        int ret;<br>+<br>+  mstb = 
drm_dp_get_validated_mstb_ref(mgr, port->parent);<br>+ if (!mstb)<br>+
                return -EREMOTEIO;<br>+<br>+        /* construct i2c msg */<br>+      /* see if 
last msg is a read */<br>+        if (msgs[num - 1].flags & I2C_M_RD)<br>+      
        reading = true;<br>+<br>+   if (!reading) {<br>+              
DRM_DEBUG_KMS("Unsupported I2C transaction for MST device\n");<br>+             ret
 = -EIO;<br>+             goto out;<br>+    }<br>+<br>+ msg.req_type = 
DP_REMOTE_I2C_READ;<br>+  msg.u.i2c_read.num_transactions = num - 1;<br>+
        msg.u.i2c_read.port_number = port->port_num;<br>+      for (i = 0; i <
 num - 1; i++) {<br>+             msg.u.i2c_read.transactions[i].i2c_dev_id = 
msgs[i].addr;<br>+                msg.u.i2c_read.transactions[i].num_bytes = 
msgs[i].len;<br>+         memcpy(&msg.u.i2c_read.transactions[i].bytes, 
msgs[i].buf, msgs[i].len);<br>+   }<br>+    msg.u.i2c_read.read_i2c_device_id
 = msgs[num - 1].addr;<br>+       msg.u.i2c_read.num_bytes_read = msgs[num - 
1].len;<br>+<br>+   txmsg = kzalloc(sizeof(*txmsg), GFP_KERNEL);<br>+ if 
(!txmsg) {<br>+           ret = -ENOMEM;<br>+               goto out;<br>+    }<br>+<br>+ 
txmsg->dst = mstb;<br>+        drm_dp_encode_sideband_req(&msg, txmsg);<br>+<br>+
        drm_dp_queue_down_tx(mgr, txmsg);<br>+<br>+ ret = 
drm_dp_mst_wait_tx_reply(mstb, txmsg);<br>+       if (ret > 0) {<br>+<br>+ 
        if (txmsg->reply.reply_type == 1) { /* got a NAK back */<br>+                  ret =
 -EREMOTEIO;<br>+                 goto out;<br>+            }<br>+            if 
(txmsg->reply.u.remote_i2c_read_ack.num_bytes != msgs[num - 1].len) {<br>+
                        ret = -EIO;<br>+                  goto out;<br>+            }<br>+            memcpy(msgs[num - 1].buf, 
txmsg->reply.u.remote_i2c_read_ack.bytes, msgs[num - 1].len);<br>+             
ret = num;<br>+   }<br>+out:<br>+     kfree(txmsg);<br>+        
drm_dp_put_mst_branch_device(mstb);<br>+  return ret;<br>+}<br>+<br>+static
 u32 drm_dp_mst_i2c_functionality(struct i2c_adapter *adapter)<br>+{<br>+
        return I2C_FUNC_I2C | I2C_FUNC_SMBUS_EMUL |<br>+         
I2C_FUNC_SMBUS_READ_BLOCK_DATA |<br>+            
I2C_FUNC_SMBUS_BLOCK_PROC_CALL |<br>+            I2C_FUNC_10BIT_ADDR;<br>+}<br>+<br>+static
 const struct i2c_algorithm drm_dp_mst_i2c_algo = {<br>+  .functionality =
 drm_dp_mst_i2c_functionality,<br>+       .master_xfer = drm_dp_mst_i2c_xfer,<br>+};<br>+<br>+/**<br>+
 * drm_dp_mst_register_i2c_bus() - register an I2C adapter for 
I2C-over-AUX<br>+ * @aux: DisplayPort AUX channel<br>+ *<br>+ * Returns 0
 on success or a negative error code on failure.<br>+ */<br>+static int 
drm_dp_mst_register_i2c_bus(struct drm_dp_aux *aux)<br>+{<br>+      
aux->ddc.algo = &drm_dp_mst_i2c_algo;<br>+ aux->ddc.algo_data =
 aux;<br>+        aux->ddc.retries = 3;<br>+<br>+  aux->ddc.class = 
I2C_CLASS_DDC;<br>+       aux->ddc.owner = THIS_MODULE;<br>+     
aux->ddc.dev.parent = aux->dev;<br>+        aux->ddc.dev.of_node = 
aux->dev->of_node;<br>+<br>+  strlcpy(aux->ddc.name, 
aux->name ? aux->name : dev_name(aux->dev),<br>+         
sizeof(aux->ddc.name));<br>+<br>+        return 
i2c_add_adapter(&aux->ddc);<br>+}<br>+<br>+/**<br>+ * 
drm_dp_mst_unregister_i2c_bus() - unregister an I2C-over-AUX adapter<br>+
 * @aux: DisplayPort AUX channel<br>+ */<br>+static void 
drm_dp_mst_unregister_i2c_bus(struct drm_dp_aux *aux)<br>+{<br>+    
i2c_del_adapter(&aux->ddc);<br>+}<br>diff --git 
a/include/drm/drm_dp_mst_helper.h b/include/drm/drm_dp_mst_helper.h<br>new
 file mode 100644<br>index 0000000..6626d1b<br>--- /dev/null<br>+++ 
b/include/drm/drm_dp_mst_helper.h<br>@@ -0,0 +1,507 @@<br>+/*<br>+ * 
Copyright © 2014 Red Hat.<br>+ *<br>+ * Permission to use, copy, modify,
 distribute, and sell this software and its<br>+ * documentation for any
 purpose is hereby granted without fee, provided that<br>+ * the above 
copyright notice appear in all copies and that both that copyright<br>+ *
 notice and this permission notice appear in supporting documentation, 
and<br>+ * that the name of the copyright holders not be used in 
advertising or<br>+ * publicity pertaining to distribution of the 
software without specific,<br>+ * written prior permission.  The 
copyright holders make no representations<br>+ * about the suitability 
of this software for any purpose.  It is provided "as<br>+ * is" without
 express or implied warranty.<br>+ *<br>+ * THE COPYRIGHT HOLDERS 
DISCLAIM ALL WARRANTIES WITH REGARD TO THIS SOFTWARE,<br>+ * INCLUDING 
ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS, IN NO<br>+ * 
EVENT SHALL THE COPYRIGHT HOLDERS BE LIABLE FOR ANY SPECIAL, INDIRECT OR<br>+
 * CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS 
OF USE,<br>+ * DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, 
NEGLIGENCE OR OTHER<br>+ * TORTIOUS ACTION, ARISING OUT OF OR IN 
CONNECTION WITH THE USE OR PERFORMANCE<br>+ * OF THIS SOFTWARE.<br>+ */<br>+#ifndef
 _DRM_DP_MST_HELPER_H_<br>+#define _DRM_DP_MST_HELPER_H_<br>+<br>+#include
 <linux/types.h><br>+#include <drm/drm_dp_helper.h><br>+<br>+struct
 drm_dp_mst_branch;<br>+<br>+/**<br>+ * struct drm_dp_vcpi - Virtual 
Channel Payload Identifier<br>+ * @vcpi: Virtual channel ID.<br>+ * @pbn:
 Payload Bandwidth Number for this channel<br>+ * @aligned_pbn: PBN 
aligned with slot size<br>+ * @num_slots: number of slots for this PBN<br>+
 */<br>+struct drm_dp_vcpi {<br>+   int vcpi;<br>+    int pbn;<br>+     int 
aligned_pbn;<br>+ int num_slots;<br>+};<br>+<br>+/**<br>+ * struct 
drm_dp_mst_port - MST port<br>+ * @kref: reference count for this port.<br>+
 * @guid_valid: for DP 1.2 devices, whether we have validated the GUID.<br>+ *
 @guid: guid for DP 1.2 device on this port.<br>+ * @port_num: port 
number<br>+ * @input: if this port is an input port.<br>+ * @mcs: 
message capability status - DP 1.2 spec.<br>+ * @ddps: DisplayPort 
Device Plug Status - DP 1.2<br>+ * @pdt: Peer Device Type<br>+ * @ldps: 
Legacy Device Plug Status<br>+ * @dpcd_rev: DPCD revision of device on 
this port<br>+ * @available_pbn: Available bandwidth for this port.<br>+
 * @next: link to next port on this branch device<br>+ * @mstb: branch 
device attached below this port<br>+ * @aux: i2c aux transport to talk to 
device connected to this port.<br>+ * @parent: branch device parent of 
this port<br>+ * @vcpi: Virtual Channel Payload info for this port.<br>+
 * @connector: DRM connector this port is connected to.<br>+ * @mgr: 
topology manager this port lives under.<br>+ *<br>+ * This structure 
represents an MST port endpoint on a device somewhere<br>+ * in the MST 
topology.<br>+ */<br>+struct drm_dp_mst_port {<br>+   struct kref kref;<br>+<br>+
        /* if dpcd 1.2 device is on this port - its GUID info */<br>+     bool 
guid_valid;<br>+  u8 guid[16];<br>+<br>+      u8 port_num;<br>+ bool input;<br>+
        bool mcs;<br>+    bool ddps;<br>+   u8 pdt;<br>+      bool ldps;<br>+   u8 
dpcd_rev;<br>+    uint16_t available_pbn;<br>+      struct list_head next;<br>+       
struct drm_dp_mst_branch *mstb; /* pointer to an mstb if this port has 
one */<br>+       struct drm_dp_aux aux; /* i2c bus for this port? */<br>+  
struct drm_dp_mst_branch *parent;<br>+<br>+ struct drm_dp_vcpi vcpi;<br>+
        struct drm_connector *connector;<br>+     struct drm_dp_mst_topology_mgr 
*mgr;<br>+};<br>+<br>+/**<br>+ * struct drm_dp_mst_branch - MST branch 
device.<br>+ * @kref: reference count for this branch device.<br>+ * @rad: 
Relative Address to talk to this branch device.<br>+ * @lct: Link count 
total to talk to this branch device.<br>+ * @num_ports: number of ports 
on the branch.<br>+ * @msg_slots: one bit per transmitted msg slot.<br>+
 * @ports: linked list of ports on this branch.<br>+ * @port_parent: 
pointer to the port parent, NULL if toplevel.<br>+ * @mgr: topology 
manager for this branch device.<br>+ * @tx_slots: transmission slots for
 this device.<br>+ * @last_seqno: last sequence number used to talk to 
this.<br>+ * @link_address_sent: if a link address message has been sent
 to this device yet.<br>+ *<br>+ * This structure represents an MST 
branch device; there is one<br>+ * primary branch device at the root, along with any others connected<br>+ * to downstream ports.<br>+ */<br>+struct
 drm_dp_mst_branch {<br>+ struct kref kref;<br>+    u8 rad[8];<br>+   u8 lct;<br>+
        int num_ports;<br>+<br>+    int msg_slots;<br>+       struct list_head ports;<br>+<br>+
        /* list of tx ops queue for this port */<br>+     struct drm_dp_mst_port 
*port_parent;<br>+        struct drm_dp_mst_topology_mgr *mgr;<br>+<br>+      /* 
slots are protected by mstb->mgr->qlock */<br>+     struct 
drm_dp_sideband_msg_tx *tx_slots[2];<br>+ int last_seqno;<br>+      bool 
link_address_sent;<br>+};<br>+<br>+<br>+/* sideband msg header - not bit
 struct */<br>+struct drm_dp_sideband_msg_hdr {<br>+        u8 lct;<br>+      u8 
lcr;<br>+ u8 rad[8];<br>+   bool broadcast;<br>+      bool path_msg;<br>+       u8 
msg_len;<br>+     bool somt;<br>+   bool eomt;<br>+   bool seqno;<br>+};<br>+<br>+struct
 drm_dp_nak_reply {<br>+  u8 guid[16];<br>+ u8 reason;<br>+   u8 nak_data;<br>+};<br>+<br>+struct
 drm_dp_link_address_ack_reply {<br>+     u8 guid[16];<br>+ u8 nports;<br>+   
struct drm_dp_link_addr_reply_port {<br>+         bool input_port;<br>+             u8 
peer_device_type;<br>+            u8 port_number;<br>+              bool mcs;<br>+            bool ddps;<br>+
                bool legacy_device_plug_status;<br>+              u8 dpcd_revision;<br>+            u8 
peer_guid[16];<br>               u8 num_sdp_streams;<br>                u8 
num_sdp_stream_sinks;<br>+        } ports[16];<br>+};<br>+<br>+struct 
drm_dp_remote_dpcd_read_ack_reply {<br>+  u8 port_number;<br>+      u8 
num_bytes;<br>+   u8 bytes[255];<br>+};<br>+<br>+struct 
drm_dp_remote_dpcd_write_ack_reply {<br>+ u8 port_number;<br>+};<br>+<br>+struct
 drm_dp_remote_dpcd_write_nak_reply {<br>+        u8 port_number;<br>+      u8 
reason;<br>+      u8 bytes_written_before_failure;<br>+};<br>+<br>+struct 
drm_dp_remote_i2c_read_ack_reply {<br>+   u8 port_number;<br>+      u8 
num_bytes;<br>+   u8 bytes[255];<br>+};<br>+<br>+struct 
drm_dp_remote_i2c_read_nak_reply {<br>+   u8 port_number;<br>+      u8 
nak_reason;<br>+  u8 i2c_nak_transaction;<br>+};<br>+<br>+struct 
drm_dp_remote_i2c_write_ack_reply {<br>+  u8 port_number;<br>+};<br>+<br>+<br>+struct
 drm_dp_sideband_msg_rx {<br>+    u8 chunk[48];<br>+        u8 msg[256];<br>+ u8 
curchunk_len;<br>+        u8 curchunk_idx; /* chunk we are parsing now */<br>+      
u8 curchunk_hdrlen;<br>+  u8 curlen; /* total length of the msg */<br>+     
bool have_somt;<br>+      bool have_eomt;<br>+      struct drm_dp_sideband_msg_hdr
 initial_hdr;<br>+};<br>+<br>+<br>+struct drm_dp_allocate_payload {<br>+
        u8 port_number;<br>+      u8 number_sdp_streams;<br>+       u8 vcpi;<br>+     u16 pbn;<br>+
        u8 sdp_stream_sink[8];<br>+};<br>+<br>+struct 
drm_dp_allocate_payload_ack_reply {<br>+  u8 port_number;<br>+      u8 vcpi;<br>+
        u16 allocated_pbn;<br>+};<br>+<br>+struct 
drm_dp_connection_status_notify {<br>+    u8 guid[16];<br>+ u8 port_number;<br>+
        bool legacy_device_plug_status;<br>+      bool 
displayport_device_plug_status;<br>+      bool message_capability_status;<br>+
        bool input_port;<br>+     u8 peer_device_type;<br>+};<br>+<br>+struct 
drm_dp_remote_dpcd_read {<br>+    u8 port_number;<br>+      u32 dpcd_address;<br>+
        u8 num_bytes;<br>+};<br>+<br>+struct drm_dp_remote_dpcd_write {<br>+    u8
 port_number;<br>+        u32 dpcd_address;<br>+    u8 num_bytes;<br>+        u8 
bytes[255];<br>+};<br>+<br>+struct drm_dp_remote_i2c_read {<br>+        u8 
num_transactions;<br>+    u8 port_number;<br>+      struct {<br>+             u8 
i2c_dev_id;<br>+          u8 num_bytes;<br>+                u8 bytes[255];<br>+               u8 
no_stop_bit;<br>+         u8 i2c_transaction_delay;<br>+    } transactions[4];<br>+
        u8 read_i2c_device_id;<br>+       u8 num_bytes_read;<br>+};<br>+<br>+struct 
drm_dp_remote_i2c_write {<br>+    u8 port_number;<br>+      u8 
write_i2c_device_id;<br>+ u8 num_bytes;<br>+        u8 bytes[255];<br>+};<br>+<br>+/*
 this covers ENUM_RESOURCES, POWER_DOWN_PHY, POWER_UP_PHY */<br>+struct 
drm_dp_port_number_req {<br>+     u8 port_number;<br>+};<br>+<br>+struct 
drm_dp_enum_path_resources_ack_reply {<br>+       u8 port_number;<br>+      u16 
full_payload_bw_number;<br>+      u16 avail_payload_bw_number;<br>+};<br>+<br>+/*
 covers POWER_DOWN_PHY, POWER_UP_PHY */<br>+struct 
drm_dp_port_number_rep {<br>+     u8 port_number;<br>+};<br>+<br>+struct 
drm_dp_query_payload {<br>+       u8 port_number;<br>+      u8 vcpi;<br>+};<br>+<br>+struct
 drm_dp_resource_status_notify {<br>+     u8 port_number;<br>+      u8 guid[16];<br>+
        u16 available_pbn;<br>+};<br>+<br>+struct 
drm_dp_query_payload_ack_reply {<br>+     u8 port_number;<br>+      u8 
allocated_pbn;<br>+};<br>+<br>+struct drm_dp_sideband_msg_req_body {<br>+
        u8 req_type;<br>+ union ack_req {<br>+              struct 
drm_dp_connection_status_notify conn_stat;<br>+           struct 
drm_dp_port_number_req port_num;<br>+             struct 
drm_dp_resource_status_notify resource_stat;<br>+<br>+              struct 
drm_dp_query_payload query_payload;<br>+          struct drm_dp_allocate_payload
 allocate_payload;<br>+<br>+                struct drm_dp_remote_dpcd_read dpcd_read;<br>+
                struct drm_dp_remote_dpcd_write dpcd_write;<br>+<br>+               struct 
drm_dp_remote_i2c_read i2c_read;<br>+             struct drm_dp_remote_i2c_write 
i2c_write;<br>+   } u;<br>+};<br>+<br>+struct 
drm_dp_sideband_msg_reply_body {<br>+     u8 reply_type;<br>+       u8 req_type;<br>+
        union ack_replies {<br>+          struct drm_dp_nak_reply nak;<br>+         struct 
drm_dp_link_address_ack_reply link_addr;<br>+             struct 
drm_dp_port_number_rep port_number;<br>+<br>+               struct 
drm_dp_enum_path_resources_ack_reply path_resources;<br>+         struct 
drm_dp_allocate_payload_ack_reply allocate_payload;<br>+          struct 
drm_dp_query_payload_ack_reply query_payload;<br>+<br>+             struct 
drm_dp_remote_dpcd_read_ack_reply remote_dpcd_read_ack;<br>+              struct 
drm_dp_remote_dpcd_write_ack_reply remote_dpcd_write_ack;<br>+            struct 
drm_dp_remote_dpcd_write_nak_reply remote_dpcd_write_nack;<br>+<br>+                
struct drm_dp_remote_i2c_read_ack_reply remote_i2c_read_ack;<br>+         
struct drm_dp_remote_i2c_read_nak_reply remote_i2c_read_nack;<br>+                
struct drm_dp_remote_i2c_write_ack_reply remote_i2c_write_ack;<br>+       } u;<br>+};<br>+<br>+/*
 msg is queued to be put into a slot */<br>+#define 
DRM_DP_SIDEBAND_TX_QUEUED 0<br>+/* msg has started transmitting on a 
slot - still on msgq */<br>+#define DRM_DP_SIDEBAND_TX_START_SEND 1<br>+/*
 msg has finished transmitting on a slot - removed from msgq only in 
slot */<br>+#define DRM_DP_SIDEBAND_TX_SENT 2<br>+/* msg has received a 
response - removed from slot */<br>+#define DRM_DP_SIDEBAND_TX_RX 3<br>+#define
 DRM_DP_SIDEBAND_TX_TIMEOUT 4<br>+<br>+struct drm_dp_sideband_msg_tx {<br>+
        u8 msg[256];<br>+ u8 chunk[48];<br>+        u8 cur_offset;<br>+       u8 cur_len;<br>+
        struct drm_dp_mst_branch *dst;<br>+       struct list_head next;<br>+       int 
seqno;<br>+       int state;<br>+   bool path_msg;<br>+       struct 
drm_dp_sideband_msg_reply_body reply;<br>+};<br>+<br>+/* sideband msg 
handler */<br>+struct drm_dp_mst_topology_mgr;<br>+struct 
drm_dp_mst_topology_cbs {<br>+    /* create a connector for a port */<br>+  
struct drm_connector *(*add_connector)(struct drm_dp_mst_topology_mgr 
*mgr, struct drm_dp_mst_port *port, char *path);<br>+     void 
(*destroy_connector)(struct drm_dp_mst_topology_mgr *mgr,<br>+                              
struct drm_connector *connector);<br>+    void (*hotplug)(struct 
drm_dp_mst_topology_mgr *mgr);<br>+<br>+};<br>+<br>+#define 
DP_MAX_PAYLOAD (sizeof(unsigned long) * 8)<br>+<br>+#define 
DP_PAYLOAD_LOCAL 1<br>+#define DP_PAYLOAD_REMOTE 2<br>+#define 
DP_PAYLOAD_DELETE_LOCAL 3<br>+<br>+struct drm_dp_payload {<br>+       int 
payload_state;<br>+       int start_slot;<br>+      int num_slots;<br>+};<br>+<br>+/**<br>+
 * struct drm_dp_mst_topology_mgr - DisplayPort MST manager<br>+ * @dev:
 device pointer for adding i2c devices etc.<br>+ * @cbs: callbacks for 
connector addition and destruction.<br>+ * @max_dpcd_transaction_bytes:
 maximum number of bytes to read/write in one go.<br>+ * @aux: aux 
channel for the DP connector.<br>+ * @max_payloads: maximum number of 
payloads the GPU can generate.<br>+ * @conn_base_id: DRM connector ID 
this mgr is connected to.<br>+ * @down_rep_recv: msg receiver state for 
down replies.<br>+ * @up_req_recv: msg receiver state for up requests.<br>+
 * @lock: protects mst state, primary, guid, dpcd.<br>+ * @aux_lock: 
protects aux channel.<br>+ * @mst_state: if this manager is enabled for 
an MST capable port.<br>+ * @mst_primary: pointer to the primary branch 
device.<br>+ * @guid_valid: GUID valid for the primary branch device.<br>+
 * @guid: GUID for primary port.<br>+ * @dpcd: cache of DPCD for primary
 port.<br>+ * @pbn_div: PBN to slots divisor.<br>+ *<br>+ * This struct 
represents the top-level DisplayPort MST topology manager.<br>+ * There 
should be one instance of this for every MST capable DP connector<br>+ *
 on the GPU.<br>+ */<br>+struct drm_dp_mst_topology_mgr {<br>+<br>+     
struct device *dev;<br>+  struct drm_dp_mst_topology_cbs *cbs;<br>+ int 
max_dpcd_transaction_bytes;<br>+  struct drm_dp_aux *aux; /* auxch for 
this topology mgr to use */<br>+  int max_payloads;<br>+    int 
conn_base_id;<br>+<br>+     /* only ever accessed from the workqueue - which
 should be serialised */<br>+     struct drm_dp_sideband_msg_rx 
down_rep_recv;<br>+       struct drm_dp_sideband_msg_rx up_req_recv;<br>+<br>+
        /* pointer to info about the initial MST device */<br>+   struct mutex 
lock; /* protects mst_state + primary + guid + dpcd */<br>+<br>+    struct 
mutex aux_lock; /* protect access to the AUX */<br>+      bool mst_state;<br>+
        struct drm_dp_mst_branch *mst_primary;<br>+       /* primary MST device GUID 
*/<br>+   bool guid_valid;<br>+     u8 guid[16];<br>+ u8 
dpcd[DP_RECEIVER_CAP_SIZE];<br>+  u8 sink_count;<br>+       int pbn_div;<br>+ 
int total_slots;<br>+     int avail_slots;<br>+     int total_pbn;<br>+<br>+    /* 
messages to be transmitted */<br>+        /* qlock protects the upq/downq and 
in_progress,<br>+    the mstb tx_slots and txmsg->state once they are
 queued */<br>+   struct mutex qlock;<br>+  struct list_head tx_msg_downq;<br>+
        struct list_head tx_msg_upq;<br>+ bool tx_down_in_progress;<br>+    bool 
tx_up_in_progress;<br>+<br>+        /* payload info + lock for it */<br>+     
struct mutex payload_lock;<br>+   struct drm_dp_vcpi **proposed_vcpis;<br>+
        struct drm_dp_payload *payloads;<br>+     unsigned long payload_mask;<br>+<br>+
        wait_queue_head_t tx_waitq;<br>+  struct work_struct work;<br>+<br>+  
struct work_struct tx_work;<br>+};<br>+<br>+int 
drm_dp_mst_topology_mgr_init(struct drm_dp_mst_topology_mgr *mgr, struct
 device *dev, struct drm_dp_aux *aux, int max_dpcd_transaction_bytes, 
int max_payloads, int conn_base_id);<br>+<br>+void 
drm_dp_mst_topology_mgr_destroy(struct drm_dp_mst_topology_mgr *mgr);<br>+<br>+<br>+int
 drm_dp_mst_topology_mgr_set_mst(struct drm_dp_mst_topology_mgr *mgr, 
bool mst_state);<br>+<br>+<br>+int drm_dp_mst_hpd_irq(struct 
drm_dp_mst_topology_mgr *mgr, u8 *esi, bool *handled);<br>+<br>+<br>+enum
 drm_connector_status drm_dp_mst_detect_port(struct 
drm_dp_mst_topology_mgr *mgr, struct drm_dp_mst_port *port);<br>+<br>+struct
 edid *drm_dp_mst_get_edid(struct drm_connector *connector, struct 
drm_dp_mst_topology_mgr *mgr, struct drm_dp_mst_port *port);<br>+<br>+<br>+int
 drm_dp_calc_pbn_mode(int clock, int bpp);<br>+<br>+<br>+bool 
drm_dp_mst_allocate_vcpi(struct drm_dp_mst_topology_mgr *mgr, struct 
drm_dp_mst_port *port, int pbn, int *slots);<br>+<br>+<br>+void 
drm_dp_mst_reset_vcpi_slots(struct drm_dp_mst_topology_mgr *mgr, struct 
drm_dp_mst_port *port);<br>+<br>+<br>+void 
drm_dp_mst_deallocate_vcpi(struct drm_dp_mst_topology_mgr *mgr,<br>+                              
struct drm_dp_mst_port *port);<br>+<br>+<br>+int 
drm_dp_find_vcpi_slots(struct drm_dp_mst_topology_mgr *mgr,<br>+                     
int pbn);<br>+<br>+<br>+int drm_dp_update_payload_part1(struct 
drm_dp_mst_topology_mgr *mgr);<br>+<br>+<br>+int 
drm_dp_update_payload_part2(struct drm_dp_mst_topology_mgr *mgr);<br>+<br>+int
 drm_dp_check_act_status(struct drm_dp_mst_topology_mgr *mgr);<br>+<br>+void
 drm_dp_mst_dump_topology(struct seq_file *m,<br>+                              struct 
drm_dp_mst_topology_mgr *mgr);<br>+<br>+void 
drm_dp_mst_topology_mgr_suspend(struct drm_dp_mst_topology_mgr *mgr);<br>+int
 drm_dp_mst_topology_mgr_resume(struct drm_dp_mst_topology_mgr *mgr);<br>+#endif<br></div></div>
  <div style="margin:30px 25px 10px 25px;" class="__pbConvHr"><div 
style="display:table;width:100%;border-top:1px solid 
#EDEEF0;padding-top:5px">       <div 
style="display:table-cell;vertical-align:middle;padding-right:6px;"><img
 photoaddress="airlied@gmail.com" photoname="Dave Airlie" 
src="cid:part1.00040602.01010205@gmail.com" name="postbox-contact.jpg" 
height="25px" width="25px"></div>   <div 
style="display:table-cell;white-space:nowrap;vertical-align:middle;width:100%">
        <a moz-do-not-send="true" href="mailto:airlied@gmail.com" 
style="color:#737F92 
!important;padding-right:6px;font-weight:bold;text-decoration:none 
!important;">Dave Airlie</a></div>   <div 
style="display:table-cell;white-space:nowrap;vertical-align:middle;">   
  <font color="#9FA2A5"><span style="padding-left:6px">Tuesday, May 20, 
2014 7:54 PM</span></font></div></div></div>
  <div style="color:#888888;margin-left:24px;margin-right:24px;" 
__pbrmquotes="true" class="__pbConvBody"><div>Hey,<br><br>So this set is
 pretty close to what I think we should be merging initially.<br><br>Since
 the last set, it makes fbcon and suspend/resume work a lot better.<br><br>I've
 also fixed a couple of bugs in -intel that make things work a lot<br>better.<br><br>I've
 bashed on this a bit using kms-flip from intel-gpu-tools, hacked<br>to 
add 3 monitor support.<br><br>It still generates a fair few i915 state 
checker backtraces, and some<br>of them are fairly hard to work out; it 
might be that we should just tone<br>down the state checker for 
encoders/connectors with no actual hw backing<br>them.<br><br>Dave.<br><br>_______________________________________________<br>Intel-gfx
 mailing list<br><a class="moz-txt-link-abbreviated" href="mailto:Intel-gfx@lists.freedesktop.org">Intel-gfx@lists.freedesktop.org</a><br><a class="moz-txt-link-freetext" href="http://lists.freedesktop.org/mailman/listinfo/intel-gfx">http://lists.freedesktop.org/mailman/listinfo/intel-gfx</a><br></div></div>
</blockquote>
<br>
</body></html>