<!-- Competitive programming practice diary template -->

Lately I've been running into a lot of greedy problems. I'm not good at them, so I want to get used to them. I hope my rating starts going up soon...

50 minutes. I thought it would pass if I just threw $O(N^3)$ at it, but it didn't, and I burned an extra 20 minutes on top of that.

A string that "appears more than once" only needs to appear at least twice, so below we call two such occurrences $s_1$ and $s_2$.

Since the problem is about substrings, my first suspicion was DP. However, I got stuck trying to handle two conditions at once: the two occurrences must **not overlap**, yet both **live within the same string**. I tried splitting the string into the part up to the $i$-th character and the part from the $(i+1)$-th character on, but that was a bad move.

While thinking it over, I focused on **contiguity** and figured a straightforward method in the spirit of the **two-pointer (shakutori) technique** might work. That is, let the first character of $s_1$ be the $i$-th character ($N$ choices), let the first character of $s_2$ be the $j$-th character with $i+1 \leqq j \leqq N$ ($N-i$ choices), and find the longest length for which $s_1$ and $s_2$ match (up to $N$ comparisons). This is $O(N^3)$, which is too slow. (It's not great that the discussion up to this point took me nearly 30 minutes.)
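A sketch of this straightforward $O(N^3)$ method (function and variable names are mine, not from the original): for every pair of start positions, extend the match while the two occurrences stay non-overlapping.

```python
def longest_repeat_naive(s: str) -> int:
    # For every pair of start positions (i, j), extend the common
    # prefix while the two occurrences remain disjoint.
    n = len(s)
    best = 0
    for i in range(n):
        for j in range(i + 1, n):
            k = 0
            # k < j - i keeps s[i:i+k] and s[j:j+k] non-overlapping
            while j + k < n and k < j - i and s[i + k] == s[j + k]:
                k += 1
            best = max(best, k)
    return best

print(longest_repeat_naive("ababa"))  # "ab" occurs at positions 0 and 2 -> 2
```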

Therefore, to compute this efficiently, **look for wasted work**. It turns out that **the part that checks whether the contiguous parts match recomputes the same comparisons over and over**. We can avoid this by **computing the back part first**, so prepare the following array and run a DP.

$dp[i][j] :=$ (the length of the longest common prefix when $s_1$ starts at the $i$-th character and $s_2$ starts at the $j$-th character, under the condition that $s_1$ and $s_2$ do not overlap, including the boundary cases where either index runs off the end)

The case analysis for the DP transition, computed from the back, is as follows; implement it, and the maximum value in $dp$ is the answer.

(1) When (the $i$-th character of $s$) $=$ (the $j$-th character of $s$)

(2) When (the $i$-th character of $s$) $\neq$ (the $j$-th character of $s$)

(✳︎) $dp[i][j]$ can be at most $j-i$; if the transition would exceed that, we may set $dp[i][j]=0$.

(1), (2), and (✳︎) can each be proved, but the proofs are omitted here.

・What is the longest common prefix? It is the longest string that is a prefix of both $s$ and $t$. For $s=$ ABCBC and $t=$ ABD it is AB; for $s=$ ABC and $t=$ BCD it is the empty string.
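As a quick check of this definition (a throwaway helper, not part of the solution):

```python
def lcp(s: str, t: str) -> str:
    # Walk both strings together and stop at the first mismatch.
    i = 0
    while i < min(len(s), len(t)) and s[i] == t[i]:
        i += 1
    return s[:i]

print(lcp("ABCBC", "ABD"))  # AB
print(lcp("ABC", "BCD"))    # (empty string)
```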

・Knowledge gained this time: for problems where $N$ forces you to cut $O(N^3)$ down to about $O(N^2)$, **paying attention to whether the same thing is being computed repeatedly lets you reduce the work with DP or precomputation**, so I'll keep that in mind. Also, this problem apparently has several other solutions using various string algorithms.

`abc141.py`

```
n=int(input())
s=input()
# dp[i][j]: match length for starts i<j, capped so the occurrences don't overlap
dp=[[0]*(n+1) for i in range(n+1)]
for i in range(n-1,-1,-1):
    for j in range(n-1,i,-1):
        if s[j]==s[i] and dp[i+1][j+1]+1<=j-i:
            dp[i][j]=dp[i+1][j+1]+1
ans=0
for i in range(n):
    for j in range(n):
        ans=max(ans,dp[i][j])
print(ans)
```
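Restating the solution above as a function, it can be cross-checked against a brute force on small random strings (the function names and the checker are mine, only for verification):

```python
import random

def solve_dp(s: str) -> int:
    # Same DP as above: match length capped at j - i so occurrences stay disjoint.
    n = len(s)
    dp = [[0] * (n + 1) for _ in range(n + 1)]
    for i in range(n - 1, -1, -1):
        for j in range(n - 1, i, -1):
            if s[i] == s[j] and dp[i + 1][j + 1] + 1 <= j - i:
                dp[i][j] = dp[i + 1][j + 1] + 1
    return max(max(row) for row in dp)

def solve_naive(s: str) -> int:
    # Try every length L; remember the earliest start of each substring
    # and accept a later start whose gap is at least L.
    n = len(s)
    for L in range(n // 2, 0, -1):
        seen = {}
        for i in range(n - L + 1):
            sub = s[i:i + L]
            if sub in seen and i - seen[sub] >= L:
                return L
            if sub not in seen:
                seen[sub] = i
    return 0

random.seed(0)
for _ in range(200):
    t = "".join(random.choice("ab") for _ in range(random.randint(1, 12)))
    assert solve_dp(t) == solve_naive(t), t
print("ok")
```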

I thought about it for 30 minutes and gave up.

I kept trying to solve it with a greedy strategy I couldn't clearly prove, and never tried the standard approach. (It shouldn't be that difficult...)

First, since there are two sets of points, blue and red, we consider choosing the optimal red point **with the blue points fixed** ($\because$ fixing the red points and choosing the optimal blue point is essentially the same).

Next, note that blue points with small $x$-coordinates have only a few red points they can pair with, so it is better to handle such points first ($\because$ **considering the $x$-coordinate and $y$-coordinate at the same time is hard, so we first pin down only one of them**). Therefore, take the blue points in ascending order of $x$-coordinate and pick the optimal red point for each ($\because$ **in a greedy algorithm, the order of selection matters!!**).

Under this ordering, the optimal red point is **the one with the largest $y$-coordinate among the selectable red points (regardless of its $x$-coordinate)**. Since the blue points are processed in ascending order of $x$-coordinate, any red point selectable now also satisfies the $x$-condition for every later blue point; so we should use up the red point with the largest $y$ now and leave the smaller-$y$ (more flexible) red points for later.

Given the above, by holding the red points in descending order of $y$-coordinate and the blue points in ascending order of $x$-coordinate, the whole thing can be implemented with just two nested for loops, giving an $O(N^2)$ solution (it can be brought down to $O(N \log{N})$, but that is a bit more cumbersome to implement).

(Apparently this problem can also be reduced to maximum matching in a bipartite graph, so I'd like to tackle it that way someday.)

`abc91c.py`

```
n=int(input())
# red points sorted by y descending, blue points by x ascending
red=sorted([list(map(int,input().split())) for i in range(n)],key=lambda x:x[1],reverse=True)
blue=sorted([list(map(int,input().split())) for i in range(n)])
ans=0
for bx,by in blue:
    for rx,ry in red:
        # first hit = selectable red point with the largest y
        if rx<bx and ry<by:
            ans+=1
            red.remove([rx,ry])
            break
print(ans)
```
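The faster variant mentioned above could look roughly like this (my sketch, not the original code; strictly speaking `bisect.insort` is $O(N)$ per insertion, so a balanced tree would be needed for a true $O(N \log N)$ bound, but the greedy idea is the same):

```python
import bisect

def max_pairs(red, blue):
    # Sweep blues by ascending x; pour every red with rx < bx into a
    # y-sorted pool, then greedily take the largest y strictly below by.
    red = sorted(red)            # by x ascending
    blue = sorted(blue)          # by x ascending
    ys = []                      # sorted y-coordinates of available reds
    ans = 0
    ri = 0
    for bx, by in blue:
        while ri < len(red) and red[ri][0] < bx:
            bisect.insort(ys, red[ri][1])
            ri += 1
        pos = bisect.bisect_left(ys, by)   # reds with y < by are ys[:pos]
        if pos:
            ys.pop(pos - 1)                # consume the largest such y
            ans += 1
    return ans

# Hypothetical example: 2 of the 3 blues can be matched
print(max_pairs([(0, 0), (1, 1), (5, 5)], [(2, 2), (3, 3), (4, 4)]))  # 2
```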

Solved Acing Programming Contest 2020 E - Camel Train.

27 minutes

First, the pairing that minimizes the maximum value of the products is the one that pairs the digestion costs in ascending order with the eating difficulties in descending order. This can be shown by an exchange argument: **rearranging the optimal pairing gives no benefit** (equivalently, **rearranging a non-optimal pairing gives a benefit**). See [ARMERIA's article](https://betrue12.hateblo.jp/entry/2019/10/30/011052) for details; below is a brief explanation.

(Hereafter, in this pairing, $a[i]$ denotes the $i$-th digestion cost from the front and $f[i]$ the $i$-th eating difficulty from the front.)
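The exchange-argument claim above (ascending $a$ against descending $f$ minimizes the maximum product) can be sanity-checked by brute force over all pairings on small random cases; this checker is my own addition, not part of the solution:

```python
import itertools
import random

def best_max_product(a, f):
    # Try every way to pair a with f and keep the smallest max product.
    return min(max(x * y for x, y in zip(a, perm))
               for perm in itertools.permutations(f))

random.seed(1)
for _ in range(100):
    n = random.randint(1, 5)
    a = [random.randint(1, 9) for _ in range(n)]
    f = [random.randint(1, 9) for _ in range(n)]
    paired = max(x * y for x, y in zip(sorted(a), sorted(f, reverse=True)))
    assert paired == best_max_product(a, f)
print("ok")
```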

I built this pairing for the initial state and tried **simulating the $K$ trainings one by one** with a priority_queue, but $K$ can be as large as $10^{18}$, so this does not finish in time.

Then, while thinking about how to reduce $K$ in bulk rather than one at a time, I supposed that **every product should be at most some value $x$**. For a pair $(a[i], f[i])$, the product is at most $x$ exactly when $a[i] \leqq \lfloor \frac{x}{f[i]} \rfloor$, so the minimum number of trainings for that pair is $\max(0,\ a[i] - \lfloor \frac{x}{f[i]} \rfloor)$ (if $a[i] \leqq \lfloor \frac{x}{f[i]} \rfloor$ already holds, the product is already at most $x$ and 0 trainings are needed). Summing the minimum trainings over all pairs takes $O(N)$, and $x$ is achievable exactly when this sum is at most $K$.

Let $x^{'}$ be the minimum such achievable $x$. **Monotonicity** is easy to show: any $x$ smaller than $x^{'}$ fails the condition and any $x$ at least $x^{'}$ satisfies it, so $x^{'}$ can be found by binary search. (**From the constraints, $x$ never needs to exceed $a_{max} \times f_{max}$**, which bounds the search range.)

Putting it together, the total complexity is $O(N \log(a_{max} \times f_{max}))$. Also, a straightforward implementation didn't run in time in Python, so I used PyPy.

`abc144e.py`

```
n,k=map(int,input().split())
a=sorted(list(map(int,input().split())))
# a stays ascending, f is reversed: the optimal pairing
f=sorted(list(map(int,input().split())),reverse=True)
l,r=0,10**12
def cal(x):
    # minimum total trainings needed so that every product is <= x
    global n,f,a,k
    ret=0
    for i in range(n):
        ret+=max(0,a[i]-(x//f[i]))
    return ret
while l+1<r:
    x=l+(r-l)//2
    if cal(x)<=k:
        r=x
    else:
        l=x
if cal(l)<=k:
    print(l)
else:
    print(r)
```
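Restating the script above as a function to check it on a small hand-made case (the function name and the input values are mine, chosen for illustration): with $N=3$, $K=5$, $a=(4,2,1)$, $f=(2,3,1)$, the minimum achievable maximum product works out to 2.

```python
def min_max_product(n, k, a, f):
    a = sorted(a)                 # ascending
    f = sorted(f, reverse=True)   # descending: the optimal pairing
    def trainings(x):
        # Total trainings needed to push every product down to <= x.
        return sum(max(0, a[i] - x // f[i]) for i in range(n))
    l, r = 0, max(a) * max(f)     # r is always achievable
    while l + 1 < r:
        m = (l + r) // 2
        if trainings(m) <= k:
            r = m
        else:
            l = m
    return l if trainings(l) <= k else r

print(min_max_product(3, 5, [4, 2, 1], [2, 3, 1]))  # 2
```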
