Finding rational polynomial functions that satisfy a certain equation to help solve wealthy gambler race...
Here is my core question: I know there is some rational polynomial function $p(s)$ that satisfies the following relationship:
$$\frac{(2s+4)(2s+3)(2s+5)}{8 (s+1)(s+3)^2}p(s+1) - p(s) = \frac{3}{s+3} \tag{1}$$
I happen to know that the answer in this instance is:
$$p(s) = \bbox[grey]{4\frac{(s+2)(5s+8)}{(s+1)}}$$
I know how to verify this answer, but not how to find it from scratch. Is there a systematic way to do this?
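For the record, the verification itself is mechanical; here is a quick symbolic check (a sketch using sympy) that the stated $p(s)$ satisfies equation (1):

```python
import sympy as sp

s = sp.symbols('s')

# The claimed solution p(s) = 4(s+2)(5s+8)/(s+1)
p = lambda x: 4*(x + 2)*(5*x + 8)/(x + 1)

# LHS minus RHS of equation (1); it should simplify to zero
coef = (2*s + 4)*(2*s + 3)*(2*s + 5)/(8*(s + 1)*(s + 3)**2)
residual = coef*p(s + 1) - p(s) - 3/(s + 3)
print(sp.simplify(residual))  # 0
```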
Now, why do I care about equation (1)? Because it helps in solving the summation arising in the answer to the following question:
"Imagine two wealthy gamblers start tossing their own separate fair coins, winning 1$ on heads and losing 1$ on tails. Both start at 0$ and have infinite bank balances. The first one wants to get to 2$ and the second one wants to get to 3$. What is the probability that the first one will reach his goal before the second one?"
You can get more details about this problem by going through the following question (and the two of my answers): Race of the wealthy gamblers: How do I get this closed form?
To get the probability that a gambler targeting 1$ will draw with another gambler targeting 1$, you need the following identity:
$$\bbox[yellow]{\left(\frac{{2s \choose s}}{2^{2s}}\right)^2}\frac{1}{(s+1)^2} = 4(4s+5)\bbox[orange]{\left(\frac{{2s+2 \choose s+1}}{2^{2s+2}}\right)^2} - 4(4s+1) \bbox[yellow]{\left(\frac{{2s \choose s}}{2^{2s}}\right)^2} \tag{2}$$
If you knew this identity, finding the closed-form expression for the summation (which is the probability that two 1$-targeting gamblers draw):
$$\sum\limits_{t=0}^s \bbox[yellow]{\left(\frac{{2t \choose t}}{2^{2t}}\right)^2}\frac{1}{(t+1)^2}$$ would be trivial, as per this approach courtesy of @robojohn.
Note how the two yellow bits are the same, and the orange bit arises by replacing $s$ with $s+1$ in the yellow bit. Also, $p(s)$ for this expression would be $p(s) = 4(4s+1)$, which accompanies the yellow bit on the RHS, while $p(s+1) = 4(4s+5)$ accompanies the orange bit. If you factor out the common terms, you'll end up with an equation like (1).
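To make the telescoping concrete (a numeric sketch, not from the original post): writing $g(t) = 4(4t+1)\left(\frac{{2t \choose t}}{2^{2t}}\right)^2$, identity (2) says each summand equals $g(t+1) - g(t)$, so the partial sums collapse to $g(s+1) - g(0)$:

```python
from fractions import Fraction
from math import comb

def a(t):
    # the yellow factor: (C(2t, t) / 2^(2t))^2, kept exact with Fraction
    return Fraction(comb(2*t, t), 2**(2*t))**2

def g(t):
    # g(t) = 4(4t+1) * a(t); identity (2) says a(t)/(t+1)^2 = g(t+1) - g(t)
    return 4*(4*t + 1)*a(t)

for s in range(6):
    lhs = sum(a(t)*Fraction(1, (t + 1)**2) for t in range(s + 1))
    rhs = g(s + 1) - g(0)   # telescoped partial sum; note g(0) = 4
    assert lhs == rhs
print("partial sums telescope to 4(4s+5)*a(s+1) - 4")
```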
This seems to be a general pattern for these kinds of summations. Here are more examples for the skeptics:
For solving the summation for a 1$-targeting gambler beating a 2$-targeting one:
$$\bbox[yellow]{\left(\frac{{2s \choose s}}{2^{2s}}\right)^2}\frac{1}{(s+1)} = 4(s+1)\bbox[orange]{\left(\frac{{2s+2 \choose s+1}}{2^{2s+2}}\right)^2} - 4s \bbox[yellow]{\left(\frac{{2s \choose s}}{2^{2s}}\right)^2} \tag{3}$$
And here are some more (all relating to gambler races; see here for details):
$$\bbox[yellow]{\left(\frac{{2s+1 \choose s}}{2^{2s+1}}\right)^2}\frac{1}{(s+2)} = 4(s+2)\bbox[orange]{\left(\frac{{2s+3 \choose s+1}}{2^{2s+3}}\right)^2} - 4(s+1) \bbox[yellow]{\left(\frac{{2s+1 \choose s}}{2^{2s+1}}\right)^2} \tag{3'}$$
$$\bbox[yellow]{\left(\frac{{2s+2 \choose s}}{2^{2s+2}} \frac{{2s+3 \choose s+1}}{2^{2s+3}} \right)}\frac{3}{(s+3)} = \frac{4(s+3)(5s+13)}{(s+2)}\bbox[orange]{\left(\frac{{2s+4 \choose s+1}}{2^{2s+4}} \frac{{2s+5 \choose s+2}}{2^{2s+5}} \right)} - \bbox[grey]{\frac{4(s+2)(5s+8)}{(s+1)}} \bbox[yellow]{\left(\frac{{2s+2 \choose s}}{2^{2s+2}} \frac{{2s+3 \choose s+1}}{2^{2s+3}} \right)} \tag{4}$$
$$\bbox[yellow]{\left(\frac{{2s \choose s}}{2^{2s}} \frac{{2s+2 \choose s}}{2^{2s+2}} \right)}\frac{3s+4}{2(s+1)^2} = \frac{-4(s+3)(s+4)}{6(s+2)}\bbox[orange]{\left(\frac{{2s+2 \choose s+1}}{2^{2s+2}} \frac{{2s+4 \choose s+1}}{2^{2s+4}} \right)} - \frac{-4(s+2)(s+3)}{6(s+1)} \bbox[yellow]{\left(\frac{{2s \choose s}}{2^{2s}} \frac{{2s+2 \choose s}}{2^{2s+2}} \right)} \tag{5}$$
I hope you're convinced of the pattern by now. Suppose I didn't know the grey bit in equation (4) and treated it as an unknown $p(s)$. Cancelling out the common terms in equation (4) would then lead to equation (1). The $p(s)$ given there is probably the only rational solution of equation (1). The question is: how do we find it?
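One systematic attack (a sketch, under the assumption that the denominator of $p$ is $(s+1)$, the shape suggested by the known answer and by the poles of equation (1)) is the method of undetermined coefficients: posit a quadratic numerator, clear denominators, and equate the coefficients of the resulting polynomial identity to zero.

```python
import sympy as sp

s, a, b, c = sp.symbols('s a b c')

# Hypothetical ansatz: a quadratic over (s + 1), with unknown coefficients a, b, c
p = lambda x: (a*x**2 + b*x + c)/(x + 1)

coef = (2*s + 4)*(2*s + 3)*(2*s + 5)/(8*(s + 1)*(s + 3)**2)
residual = sp.together(coef*p(s + 1) - p(s) - 3/(s + 3))

# The combined numerator must vanish identically in s:
# collect it as a polynomial in s and set every coefficient to zero.
eqs = sp.Poly(sp.expand(sp.numer(residual)), s).coeffs()
sol = sp.solve(eqs, [a, b, c])
print(sol)  # a = 20, b = 72, c = 64, i.e. p(s) = (20s^2+72s+64)/(s+1) = 4(s+2)(5s+8)/(s+1)
```

The linear system is overdetermined but consistent, and its unique solution recovers the grey bit.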
Tags: probability, probability-theory, summation, markov-chains
asked Jan 9 at 4:42 by Rohit Pandey
3 Answers
We can re-express as
\begin{align*}
(2s+3)(2s+4)(2s+5)p(s+1) = 8(s+1)(s+3)^2 p(s) + 24(s+1)(s+3)
\end{align*}
In terms of the generating function $f(x) = \sum_{s \ge 0} p(s)\,x^s$, this becomes
\begin{align*}
(2L+3)(2L+4)(2L+5)\frac{f(x)-p(0)}{x} = 8(L+1)(L+3)^2f(x) + 24(L+1)(L+3)\frac{1}{1-x}
\end{align*}
where $L = x\frac{d}{dx}$ is the operator that first differentiates with respect to $x$ and then multiplies by $x$. This gives us a differential equation for $f$. Solving it, along with the initial condition $p(0) = 64$, we have
\begin{align*}
f(x) = \frac{4\left(-8x^2 - 3x^2\log(1-x) + 13x + 6x\log(1-x) - 3\log(1-x)\right)}{(1-x)^2x}
\end{align*}
Through $-\log(1-x) = x + \frac{x^2}{2} + \frac{x^3}{3} + \cdots$ and partial fractions, you will find that the $s$th coefficient of the Taylor series expansion at $x = 0$ is indeed $4(s+2)(5s+8)/(s+1)$.
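As a sanity check (a sketch with sympy, taking the $f(x)$ above as given), the low-order series coefficients of $f$ can be compared against $4(s+2)(5s+8)/(s+1)$ directly:

```python
import sympy as sp

x = sp.symbols('x')

# f(x) as stated above, corresponding to p(0) = 64
f = 4*(-8*x**2 - 3*x**2*sp.log(1 - x) + 13*x + 6*x*sp.log(1 - x)
       - 3*sp.log(1 - x))/((1 - x)**2*x)

ser = sp.series(f, x, 0, 5).removeO()
for k in range(5):
    coeff = ser.coeff(x, k)
    target = sp.Rational(4)*(k + 2)*(5*k + 8)/(k + 1)
    print(k, coeff, target)  # coefficients 64, 78, 96, 115, 672/5 -- they agree
```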
answered by Tom Chen
Comment (Rohit Pandey): Thanks! How did you solve the differential equation?
HINT:
We are given a first-order linear non-homogeneous finite-difference equation
$$
\frac{a(s)}{b(s)}\,p(s+1) + \frac{c(s)}{d(s)}\,p(s) = \frac{f(s)}{g(s)}
$$
where all the parameters $a(s), \ldots, g(s)$ are polynomials in $s$.
The above equation shall be valid for all real values of $s$, in particular for those for which $a(s) \ne 0$; let's assume $s$ belongs to such a domain.
We normalize the coefficient of $p(s+1)$ to $1$:
$$
p(s+1) + \frac{b(s)c(s)}{a(s)d(s)}\,p(s) = \frac{b(s)f(s)}{a(s)g(s)}
$$
Thereafter, assume also that $s$ (and $s+1$) belongs to a domain in which $b(s)c(s) \ne 0$, so we can divide by the summing factor (the analogue of an integrating factor)
$$
h(s) = \left( -1 \right)^{s} \prod\limits_{k=u}^{s-1} \frac{b(k)c(k)}{a(k)d(k)}
$$
where $k$ ranges in the said domain, to obtain an exact difference
\begin{align*}
& \frac{p(s+1)}{h(s+1)} + \frac{b(s)c(s)}{a(s)d(s)}\frac{p(s)}{h(s+1)} = \frac{b(s)f(s)}{a(s)g(s)h(s+1)} \\
& \frac{p(s+1)}{h(s+1)} - \frac{p(s)}{h(s)} = \frac{b(s)f(s)}{a(s)g(s)h(s+1)} \\
& \Delta\left(\frac{p(s)}{h(s)}\right) = \frac{b(s)f(s)}{a(s)g(s)h(s+1)} \\
& \frac{p(s)}{h(s)} = \sum\limits_{k=v}^{s-1} \frac{b(k)f(k)}{a(k)g(k)h(k+1)} + C
\end{align*}
Finally, one verifies the domain of validity of the solution under the assumptions made.
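As a numeric companion to this hint (a sketch, not part of the summing-factor derivation itself): once an initial value is fixed, the recurrence determines $p$ completely, and iterating equation (1) from $p(0) = 64$ with exact rationals reproduces the closed form at every step.

```python
from fractions import Fraction

def p_closed(s):
    # the claimed closed form 4(s+2)(5s+8)/(s+1)
    return Fraction(4*(s + 2)*(5*s + 8), s + 1)

p = Fraction(64)  # p(0) = 4*2*8/1
for s in range(10):
    assert p == p_closed(s)
    # equation (1) solved for p(s+1)
    p = (p + Fraction(3, s + 3))*Fraction(8*(s + 1)*(s + 3)**2,
                                          (2*s + 4)*(2*s + 3)*(2*s + 5))
print(p)  # p(10) = 2784/11
```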
answered by G Cab
This is only a partial answer, but it illustrates a general strategy for solving equations like this.
Note that if $p(s)$ satisfies your identity for all $s \ge 0$, then in particular it holds for all $s \in \mathbb{N}$. That gives a recurrence which we can solve using either Mathematica or Maple. For example, in Mathematica you can use
RSolve[-p[s] + ((3 + 2 s) (4 + 2 s) (5 + 2 s) p[1 + s])/(8 (1 + s) (3 + s)^2) == 3/(3 + s), p[s], s]
The result is
$$
-\frac{3 (2 s+3) (2 s+5) \, {}_3F_2\left(2,s+\frac{5}{2},s+\frac{7}{2};s+4,s+5;1\right)}{4 (s+1) (s+3)^2 (s+4)}+\frac{(s+2) \left(3 \pi c_1+512 s-192 \pi +1152\right) \Gamma (s+1) \Gamma (s+3)}{32\, \Gamma \left(s+\frac{3}{2}\right) \Gamma \left(s+\frac{5}{2}\right)}-4 (s+2) (4 s+7),
$$
where $c_1$ is a constant determined by the value of $p(0)$.
I say this is a partial answer because so far I have not been able to simplify this to
$$
4\frac{(s+2)(5s+8)}{(s+1)}.
$$
But this should be true by taking $c_1=64$. I just do not know enough about hypergeometric functions to prove it.
Comment (Rohit Pandey, Jan 15 at 2:21): Thanks, interesting direction. Will see if I can pursue it further. +1
3 Answers
3
active
oldest
votes
3 Answers
3
active
oldest
votes
active
oldest
votes
active
oldest
votes
$begingroup$
We can re-express as
begin{align*}
(2s+3)(2s+4)(2s+5)p(s+1) = 8(s+1)(s+3)^2 p(s) + 24(s+1)(s+3)
end{align*}
The generating function for this is
begin{align*}
(2L+3)(2L+4)(2L+5)frac{f(x)-p(0)}{x} = 8(L+1)(L+3)^2f(x) + 24(L+1)(L+3)frac{1}{1-x}
end{align*}
where $L = xfrac{d}{dx}$ is an operator which first takes the derivative w.r.t $x$, and then multiplies by $x$. This gives us a differential equation for $f$. Solving this, along with the initial condition $p(0) = 64$, we have
begin{align*}
f(x) = frac{4(-8x^2 - 3x^2log(1-x)+13x + 6xlog(1-x) - 3log(1-x))}{(1-x)^2x}
end{align*}
Through $log(1-x) = x + frac{x^2}{2} + frac{x^3}{3} + cdots$ and partial fractions, you will arrive that the $s$th coefficient of the Taylor series expansion at $x = 0$ is indeed $4(s+2)(5s+8)/(s+1)$.
$endgroup$
$begingroup$
Thanks! How did you solve the differential equation?
$endgroup$
– Rohit Pandey
1 hour ago
add a comment |
$begingroup$
We can re-express as
begin{align*}
(2s+3)(2s+4)(2s+5)p(s+1) = 8(s+1)(s+3)^2 p(s) + 24(s+1)(s+3)
end{align*}
The generating function for this is
begin{align*}
(2L+3)(2L+4)(2L+5)frac{f(x)-p(0)}{x} = 8(L+1)(L+3)^2f(x) + 24(L+1)(L+3)frac{1}{1-x}
end{align*}
where $L = xfrac{d}{dx}$ is an operator which first takes the derivative w.r.t $x$, and then multiplies by $x$. This gives us a differential equation for $f$. Solving this, along with the initial condition $p(0) = 64$, we have
begin{align*}
f(x) = frac{4(-8x^2 - 3x^2log(1-x)+13x + 6xlog(1-x) - 3log(1-x))}{(1-x)^2x}
end{align*}
Through $log(1-x) = x + frac{x^2}{2} + frac{x^3}{3} + cdots$ and partial fractions, you will arrive that the $s$th coefficient of the Taylor series expansion at $x = 0$ is indeed $4(s+2)(5s+8)/(s+1)$.
$endgroup$
$begingroup$
Thanks! How did you solve the differential equation?
$endgroup$
– Rohit Pandey
1 hour ago
add a comment |
$begingroup$
We can re-express as
begin{align*}
(2s+3)(2s+4)(2s+5)p(s+1) = 8(s+1)(s+3)^2 p(s) + 24(s+1)(s+3)
end{align*}
The generating function for this is
begin{align*}
(2L+3)(2L+4)(2L+5)frac{f(x)-p(0)}{x} = 8(L+1)(L+3)^2f(x) + 24(L+1)(L+3)frac{1}{1-x}
end{align*}
where $L = xfrac{d}{dx}$ is an operator which first takes the derivative w.r.t $x$, and then multiplies by $x$. This gives us a differential equation for $f$. Solving this, along with the initial condition $p(0) = 64$, we have
begin{align*}
f(x) = frac{4(-8x^2 - 3x^2log(1-x)+13x + 6xlog(1-x) - 3log(1-x))}{(1-x)^2x}
end{align*}
Through $log(1-x) = x + frac{x^2}{2} + frac{x^3}{3} + cdots$ and partial fractions, you will arrive that the $s$th coefficient of the Taylor series expansion at $x = 0$ is indeed $4(s+2)(5s+8)/(s+1)$.
$endgroup$
We can re-express as
begin{align*}
(2s+3)(2s+4)(2s+5)p(s+1) = 8(s+1)(s+3)^2 p(s) + 24(s+1)(s+3)
end{align*}
The generating function for this is
begin{align*}
(2L+3)(2L+4)(2L+5)frac{f(x)-p(0)}{x} = 8(L+1)(L+3)^2f(x) + 24(L+1)(L+3)frac{1}{1-x}
end{align*}
where $L = xfrac{d}{dx}$ is an operator which first takes the derivative w.r.t $x$, and then multiplies by $x$. This gives us a differential equation for $f$. Solving this, along with the initial condition $p(0) = 64$, we have
begin{align*}
f(x) = frac{4(-8x^2 - 3x^2log(1-x)+13x + 6xlog(1-x) - 3log(1-x))}{(1-x)^2x}
end{align*}
Through $log(1-x) = x + frac{x^2}{2} + frac{x^3}{3} + cdots$ and partial fractions, you will arrive that the $s$th coefficient of the Taylor series expansion at $x = 0$ is indeed $4(s+2)(5s+8)/(s+1)$.
edited 4 hours ago
answered 10 hours ago
Tom ChenTom Chen
648312
648312
$begingroup$
Thanks! How did you solve the differential equation?
$endgroup$
– Rohit Pandey
1 hour ago
add a comment |
$begingroup$
Thanks! How did you solve the differential equation?
$endgroup$
– Rohit Pandey
1 hour ago
$begingroup$
Thanks! How did you solve the differential equation?
$endgroup$
– Rohit Pandey
1 hour ago
$begingroup$
Thanks! How did you solve the differential equation?
$endgroup$
– Rohit Pandey
1 hour ago
add a comment |
$begingroup$
HINT:
We are given a first degree Finite Difference equation, linear and non-homogeneous
$$
{{a(s)} over {b(s)}}p(s + 1) + {{c(s)} over {d(s)}}p(s ) = {{f(s)} over {g(s)}}
$$
where all the parameters $a(s), cdots , f(s)$ are polynomials in $s$.
The above equation shall be valid for all real values of $s$, in particular for those for which
$a(s) ne 0$:
let's assume $s$ belongs to such a domain.
We normalize to $1$ the coefficient of $p(s+1)$
$$
p(s + 1) + {{b(s)c(s)} over {a(s)d(s)}}p(s ) = {{b(s)f(s)} over {a(s)g(s)}}
$$
Thereafter, assume also that $s$ (and $s+1$) belongs to a domain in which $b(s)c(s) ne 0$.
So we can divide by the summing factor (analog to the integration factor)
$$
h(s) = left( { - 1} right)^{,s} prodlimits_{k, = ,u}^{s - 1} {{{b(k)c(k)} over {a(k)d(k)}}}
$$
where $k$ ranges in the said domain, to obtain an exact difference
$$
eqalign{
& {{p(s + 1)} over {h(s + 1)}} + {{b(s)c(s)} over {a(s)d(s)}}{{p(s)} over {h(s + 1)}} = {{b(s)f(s)} over {a(s)g(s)h(s + 1)}} cr
& {{p(s + 1)} over {h(s + 1)}} - {{p(s)} over {h(s)}} = {{b(s)f(s)} over {a(s)g(s)h(s + 1)}} cr
& Delta left( {{{p(s)} over {h(s)}}} right) = {{b(s)f(s)} over {a(s)g(s)h(s + 1)}} cr
& {{p(s)} over {h(s)}} = sumlimits_{k = ,v}^{s - 1} {{{b(k)f(k)} over {a(k)g(k)h(k + 1)}}} + C cr}
$$
Then we shall verify the domain of validity of the solution
standing the assumptions made.
$endgroup$
add a comment |
$begingroup$
HINT:
We are given a first degree Finite Difference equation, linear and non-homogeneous
$$
{{a(s)} over {b(s)}}p(s + 1) + {{c(s)} over {d(s)}}p(s ) = {{f(s)} over {g(s)}}
$$
where all the parameters $a(s), cdots , f(s)$ are polynomials in $s$.
The above equation shall be valid for all real values of $s$, in particular for those for which
$a(s) ne 0$:
let's assume $s$ belongs to such a domain.
We normalize to $1$ the coefficient of $p(s+1)$
$$
p(s + 1) + {{b(s)c(s)} over {a(s)d(s)}}p(s ) = {{b(s)f(s)} over {a(s)g(s)}}
$$
Thereafter, assume also that $s$ (and $s+1$) belongs to a domain in which $b(s)c(s) ne 0$.
So we can divide by the summing factor (analog to the integration factor)
$$
h(s) = left( { - 1} right)^{,s} prodlimits_{k, = ,u}^{s - 1} {{{b(k)c(k)} over {a(k)d(k)}}}
$$
where $k$ ranges in the said domain, to obtain an exact difference
$$
eqalign{
& {{p(s + 1)} over {h(s + 1)}} + {{b(s)c(s)} over {a(s)d(s)}}{{p(s)} over {h(s + 1)}} = {{b(s)f(s)} over {a(s)g(s)h(s + 1)}} cr
& {{p(s + 1)} over {h(s + 1)}} - {{p(s)} over {h(s)}} = {{b(s)f(s)} over {a(s)g(s)h(s + 1)}} cr
& Delta left( {{{p(s)} over {h(s)}}} right) = {{b(s)f(s)} over {a(s)g(s)h(s + 1)}} cr
& {{p(s)} over {h(s)}} = sumlimits_{k = ,v}^{s - 1} {{{b(k)f(k)} over {a(k)g(k)h(k + 1)}}} + C cr}
$$
Then we shall verify the domain of validity of the solution
standing the assumptions made.
$endgroup$
add a comment |
$begingroup$
HINT:
We are given a first degree Finite Difference equation, linear and non-homogeneous
$$
{{a(s)} over {b(s)}}p(s + 1) + {{c(s)} over {d(s)}}p(s ) = {{f(s)} over {g(s)}}
$$
where all the parameters $a(s), cdots , f(s)$ are polynomials in $s$.
The above equation shall be valid for all real values of $s$, in particular for those for which
$a(s) ne 0$:
let's assume $s$ belongs to such a domain.
We normalize to $1$ the coefficient of $p(s+1)$
$$
p(s + 1) + {{b(s)c(s)} over {a(s)d(s)}}p(s ) = {{b(s)f(s)} over {a(s)g(s)}}
$$
Thereafter, assume also that $s$ (and $s+1$) belongs to a domain in which $b(s)c(s) ne 0$.
So we can divide by the summing factor (analog to the integration factor)
$$
h(s) = left( { - 1} right)^{,s} prodlimits_{k, = ,u}^{s - 1} {{{b(k)c(k)} over {a(k)d(k)}}}
$$
where $k$ ranges in the said domain, to obtain an exact difference
$$
eqalign{
& {{p(s + 1)} over {h(s + 1)}} + {{b(s)c(s)} over {a(s)d(s)}}{{p(s)} over {h(s + 1)}} = {{b(s)f(s)} over {a(s)g(s)h(s + 1)}} cr
& {{p(s + 1)} over {h(s + 1)}} - {{p(s)} over {h(s)}} = {{b(s)f(s)} over {a(s)g(s)h(s + 1)}} cr
& Delta left( {{{p(s)} over {h(s)}}} right) = {{b(s)f(s)} over {a(s)g(s)h(s + 1)}} cr
& {{p(s)} over {h(s)}} = sumlimits_{k = ,v}^{s - 1} {{{b(k)f(k)} over {a(k)g(k)h(k + 1)}}} + C cr}
$$
Then we shall verify the domain of validity of the solution
standing the assumptions made.
$endgroup$
HINT:
We are given a first degree Finite Difference equation, linear and non-homogeneous
$$
{{a(s)} over {b(s)}}p(s + 1) + {{c(s)} over {d(s)}}p(s ) = {{f(s)} over {g(s)}}
$$
where all the parameters $a(s), cdots , f(s)$ are polynomials in $s$.
The above equation shall be valid for all real values of $s$, in particular for those for which
$a(s) ne 0$:
let's assume $s$ belongs to such a domain.
We normalize to $1$ the coefficient of $p(s+1)$
$$
p(s + 1) + {{b(s)c(s)} over {a(s)d(s)}}p(s ) = {{b(s)f(s)} over {a(s)g(s)}}
$$
Thereafter, assume also that $s$ (and $s+1$) belongs to a domain in which $b(s)c(s) ne 0$.
So we can divide by the summing factor (analog to the integration factor)
$$
h(s) = left( { - 1} right)^{,s} prodlimits_{k, = ,u}^{s - 1} {{{b(k)c(k)} over {a(k)d(k)}}}
$$
where $k$ ranges in the said domain, to obtain an exact difference
$$
eqalign{
& {{p(s + 1)} over {h(s + 1)}} + {{b(s)c(s)} over {a(s)d(s)}}{{p(s)} over {h(s + 1)}} = {{b(s)f(s)} over {a(s)g(s)h(s + 1)}} cr
& {{p(s + 1)} over {h(s + 1)}} - {{p(s)} over {h(s)}} = {{b(s)f(s)} over {a(s)g(s)h(s + 1)}} cr
& Delta left( {{{p(s)} over {h(s)}}} right) = {{b(s)f(s)} over {a(s)g(s)h(s + 1)}} cr
& {{p(s)} over {h(s)}} = sumlimits_{k = ,v}^{s - 1} {{{b(k)f(k)} over {a(k)g(k)h(k + 1)}}} + C cr}
$$
Then we shall verify the domain of validity of the solution
standing the assumptions made.
edited 9 hours ago
answered 12 hours ago
G CabG Cab
18.4k31237
18.4k31237
add a comment |
add a comment |
$begingroup$
This only a partial answer. But it's a general strategy to solve things like this.
Note that if $p(s)$ satisfies your identity for all $s ge 0$, then it must also work for all $s in mathbb N$. Then we have a recursion which we can solve using either Mathematica or Maple. For example, in Mathematica you can use
Solve[-p[s] + ((3 + 2 s) (4 + 2 s) (5 + 2 s) p[1 + s])/(8 (1 + s) (3 + s)^2) == 3/(3 + s),p[s],s]
The result is
$$
-frac{3 (2 s+3) (2 s+5) , _3F_2left(2,s+frac{5}{2},s+frac{7}{2};s+4,s+5;1right)}{4 (s+1) (s+3)^2 (s+4)}+frac{(s+2) left(3 pi c_1+512 s-192 pi +1152right) Gamma (s+1) Gamma (s+3)}{32 Gamma left(s+frac{3}{2}right) Gamma left(s+frac{5}{2}right)}-4 (s+2) (4 s+7),
$$
where $c_1$ is constant decided by the value of $p(0)$.
I say this is a partial answer because so far I have not been able to simplify this to
$$
4frac{(s+2)(5s+8)}{(s+1)}.
$$
But this should be true by taking $c_1=64$. I just do not know enough hypergeometric function to prove it.
$endgroup$
$begingroup$
Thanks, interesting direction. Will see if I can pursue it further. +1
$endgroup$
– Rohit Pandey
Jan 15 at 2:21
answered Jan 15 at 0:15
ablmf