Relationship between R-squared and p-value in a regression
There is no fixed relationship between the p-value and R-squared; it depends entirely on the data (i.e. it is contextual).
The R-squared value tells you how much of the variation in the data your model explains, so an R-squared of 0.1 means the model explains 10% of the variation. Other things being equal, the higher the R-squared, the more of the data the model accounts for. The p-value, on the other hand, comes from the F-test of the null hypothesis that "the intercept-only model and your model fit equally well". If the p-value is below the significance level (usually 0.05), your model is statistically significant, i.e. it explains more variation than the intercept alone.
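To make the two quantities concrete, here is a minimal sketch that computes both for a simple linear regression on simulated data (the slope, noise level, and sample size are arbitrary choices for illustration): R-squared from the residual and total sums of squares, and the p-value from the overall F-test against the intercept-only model.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated data: a weak but real linear signal plus a lot of noise
n = 200
x = rng.normal(size=n)
y = 0.5 * x + rng.normal(scale=2.0, size=n)

# Fit simple linear regression y = b0 + b1*x by least squares
b1, b0 = np.polyfit(x, y, 1)
y_hat = b0 + b1 * x

# R-squared: fraction of total variation explained by the model
sst = np.sum((y - y.mean()) ** 2)   # total sum of squares
sse = np.sum((y - y_hat) ** 2)      # residual sum of squares
r2 = 1 - sse / sst

# F-test of the model against the intercept-only model
# (1 parameter beyond the intercept, n - 2 residual degrees of freedom)
f_stat = ((sst - sse) / 1) / (sse / (n - 2))
p_value = stats.f.sf(f_stat, 1, n - 2)

print(f"R-squared: {r2:.3f}")
print(f"F-test p-value: {p_value:.4g}")
```

With these settings the fit lands in scenario 1 below: the noise keeps R-squared low, but the large sample makes the weak signal statistically significant.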
Thus you have four scenarios:
1. low R-squared and low p-value (p-value <= 0.05)
Your model doesn't explain much of the variation in the data, but it is significant (better than not having a model)
2. low R-squared and high p-value (p-value > 0.05)
Your model doesn't explain much of the variation in the data and it is not significant (worst scenario)
3. high R-squared and low p-value
Your model explains a lot of the variation in the data and is significant (best scenario)
4. high R-squared and high p-value
Your model explains a lot of the variation in the data but is not significant (the model is worthless; this typically happens with very small samples)
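Scenarios 1 and 4 can be shown directly, because for a one-predictor regression the F-test p-value is a deterministic function of R-squared and the sample size n. The helper below (a hypothetical name, not from the source) uses the identity F = R²/(1 − R²) · (n − 2):

```python
from scipy import stats

def f_pvalue_from_r2(r2, n):
    """p-value of the overall F-test for a one-predictor regression,
    given the R-squared and the sample size n."""
    f_stat = (r2 / (1 - r2)) * (n - 2)
    return stats.f.sf(f_stat, 1, n - 2)

# Scenario 1: a low R-squared can still be significant with many points
print(f_pvalue_from_r2(0.06, 200))  # below 0.05

# Scenario 4: a high R-squared can be non-significant with few points
print(f_pvalue_from_r2(0.80, 4))    # above 0.05
```

The same R-squared therefore carries very different evidence depending on how much data backs it up, which is why the four scenarios above are all possible.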