This will be uploaded soon.

Since the semester has ended and some of us won't be at the University for a while, could you maybe upload the grade sheet for the HWs here?

Thank you!

The graded HW2 submissions can be found in mailbox #263 (Daniel Carmon) at Schreiber.

Those who submitted online will find a printed version of their HW there.

Also, several students who submitted the theoretical questions didn't submit the programming assignment (or at least the automatic tester didn't receive it).

Please check your graded work to see whether I noted that your programming assignment wasn't received, and if so contact me at:

li.ca.uat.liam|adnomrac

Thanks,

Daniel.

Using a Markov model of higher order will (probably) give better results, but this is not the model we are asked to implement.

If you also take NLP, think about a first-order Markov model for sentence completion. This model assumes that the next word depends only on the current word. That will clearly give inaccurate results in most cases, but this is the definition of the model.
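To make the analogy concrete, here is a minimal sketch of first-order (bigram) Markov sentence completion; the toy corpus and the `complete` helper are my own illustrative assumptions, not part of either course's assignment:

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count bigram transitions: next-word counts conditioned on the current word only.
transitions = defaultdict(Counter)
for cur, nxt in zip(corpus, corpus[1:]):
    transitions[cur][nxt] += 1

def complete(word):
    """Suggest the most likely next word given only the current word."""
    counts = transitions[word]
    return counts.most_common(1)[0][0] if counts else None

print(complete("the"))  # "cat": it follows "the" twice, more than any other word
```

Note how the suggestion ignores everything before the current word; that is exactly the first-order Markov assumption, and exactly why the completions are often inaccurate.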

Some students claimed that when completing the unobserved pixels in the picture we need to consider only the frame of pixels surrounding them, because it's a Markov network and the rest of the observed pixels are therefore separated from them by that frame.

I don't understand why this is correct. According to that claim, the same frame from two different pictures would give us the same completion. Isn't that wrong? If not, why?

And if we do take into account some pixels farther outside the frame, won't we get more accurate and correct results, since we use more information about the picture?

Maybe I misunderstood something about the log. When I compute log(phi(x_i, x_j)), in most cases I get negative values, which seems wrong, since the messages have to be positive, right? Otherwise we might get negative values for the pixels.

The only cases where we don't get negative values are when the difference between the pixel values is really small.

Does that make sense? How?

This is equivalent to eq. 2 if we say that at t=0 all messages are initialized to the all-ones vector.

When working with logs, the message updates etc. are additive, so zero values shouldn't be a problem. Also note that log-messages can legitimately be negative: positivity is required of the messages themselves, and exp of a negative log-message is still positive.
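To see that negative log values are harmless, here is a minimal sketch of one sum-product update done in the log domain; the potential, label count, and unary factor are illustrative assumptions, not the assignment's exact model:

```python
import numpy as np

K = 4  # illustrative number of labels (pixel values)
rng = np.random.default_rng(0)

# A pairwise potential phi(x_i, x_j) in (0, 1]: its log is <= 0 by construction.
log_phi = -np.abs(np.subtract.outer(np.arange(K), np.arange(K))).astype(float)
log_unary = np.log(rng.random(K))  # log of a positive unary factor: negative
log_in = np.zeros(K)               # incoming log-messages, initialized to log(1) = 0

# Sum-product: m_ij(x_j) = sum over x_i of phi(x_i, x_j) * psi(x_i) * (incoming).
# In the log domain the products become sums, and the outer sum becomes log-sum-exp.
a = log_phi + (log_unary + log_in)[:, None]  # one row per value of x_i
amax = a.max(axis=0)
log_m = amax + np.log(np.exp(a - amax).sum(axis=0))  # stable log-sum-exp over x_i

print(log_m)          # entries may well be negative; that is not an error
print(np.exp(log_m))  # the message itself is strictly positive, as required
```

The max-shift before exponentiating is the standard log-sum-exp trick, which also sidesteps the underflow-to-zero problem mentioned in the question.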

As the question states: "The question refers to sum-product LBP", so the messages are defined with summing.

1. "We understood from the question that we need to compute the outgoing messages for each unobserved node, then use those messages to update the incoming messages of unobserved neighbor nodes." - The observed nodes also send messages that should (must) be computed.

2. "How do we know what pixel maximizes the incoming messages?" - I think you may be mixing up x_i and X_i. X_i is a node that corresponds to a pixel, while x_i denotes a pixel value (between 0 and 255), so the argmax is straightforward to find.

3. "When computing outgoing messages m_ij(x_j), should we compute it for every possible value of x_j?" - Yes. This tells X_j what its neighbor X_i "thinks" of each possible assignment it (X_j) can have.

Is the sum over the values of a specific / any neighbor j?

Or is it over the values of all the neighbors of i? (If so, there is a missing outer sum over j in N(i).)

We get much better results without it, and it causes some problems, since there are cases of zero values in the message.

Thanks

I understand from the instructions that we need to go over all the unobserved nodes and update all the outgoing messages from them to their observed neighbors.

All the incoming messages to the unobserved nodes are simply the pairwise function where the "from" node is an observed pixel and the "to" node is a possible value of the unobserved node.

However, this way the incoming messages are never updated, and the approximate assignment depends only on the incoming messages of the unobserved nodes.

How are the incoming messages updated? How do we use the updated outgoing messages to approximate the unobserved pixels?

Thanks