I could only solve problems A and B, and it later turned out that my B solution was an incorrect solution that just happened to pass. When I tried to upsolve problem C, a few cases kept coming back WA and it gave me a lot of trouble. Since my approach to C was greedy, mistakes were always possible, but it was still frustrating not to know what was wrong.
(It was difficult, but I eventually solved it with the longest-path approach from the editorial. See the explanation of problem C below for details.)
Check whether the four numbers can be split into two groups with equal sums. The key observation is that any valid group must sum to exactly half of the total.
Therefore, it suffices to check whether any of $a+b, b+c, c+d, d+a, a+c, b+d, a, b, c, d$ equals exactly half the total. You do not need to consider choosing three numbers, because that case is just the complement of choosing one.
A.py
a,b,c,d=map(int,input().split())
cand=[a+b,b+c,c+d,d+a,a+c,b+d,a,b,c,d]  # sum of every way to pick one or two of the numbers
s=sum([a,b,c,d])
if s%2==1:
    print("No")
elif s//2 in cand:
    print("Yes")
else:
    print("No")
Simulate the process directly. Since applying the operation to equal numbers gives the same result and only one number remains at the end, store the numbers in a set; an ordered set is convenient because the maximum and minimum are easy to take out.
This passed the (weak) test cases during the contest, but it is an incorrect solution. For example, on the input $1\ 10^9$ the simulation would insert into the set on the order of $10^9$ times. The correct approach is to notice that this operation is the Euclidean algorithm and simply take the gcd of all the numbers (a sketch is shown after the code below). Alternatively, while (maximum) $- x \times$ (minimum) $> 0$, you can apply the subtraction for the largest such $x$ in a single step, which should also be fast enough.
B.cc
//Debugging options:-fsanitize=undefined,address
//Compiler optimization
#pragma GCC optimize("Ofast")
//Include etc.
#include<bits/stdc++.h>
using namespace std;
typedef long long ll;
//macro
//for loop
//Arguments: (loop variable, range) or (loop variable, first value, last value)
//If there is no D, the loop variable is incremented by 1, and if it is with D, the loop variable is decremented by 1.
//FORA is a range for statement(If it's hard to use, erase it)
#define REP(i,n) for(ll i=0;i<ll(n);i++)
#define REPD(i,n) for(ll i=n-1;i>=0;i--)
#define FOR(i,a,b) for(ll i=a;i<=ll(b);i++)
#define FORD(i,a,b) for(ll i=a;i>=ll(b);i--)
#define FORA(i,I) for(const auto& i:I)
//x is a container such as vector
#define ALL(x) x.begin(),x.end()
#define SIZE(x) ll(x.size())
//constant
#define INF 1000000000000 //10^12:∞
#define MOD 1000000007 //10^9+7:Congruence law
#define MAXR 100000 //10^5:The largest range in the array
//Abbreviation
#define PB push_back //Insert
#define MP make_pair //pair constructor
#define F first //The first element of pair
#define S second //The second element of pair
signed main(){
    //Code for speeding up input (commented out here)
    //ios::sync_with_stdio(false);
    //cin.tie(nullptr);
    ll n;cin>>n;
    set<ll> ans;  //ordered set: *begin() is the minimum, *--end() is the maximum
    REP(i,n){
        ll a;cin>>a;
        ans.insert(a);
    }
    //repeatedly replace the maximum with (maximum - minimum) until one number remains
    while(SIZE(ans)!=1){
        ll l,r;l=*ans.begin();r=*--ans.end();
        ans.erase(r);
        ans.insert(r-l);
    }
    cout<<*ans.begin()<<endl;
}
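For reference, here is a minimal sketch of the gcd solution for B described above (this is not the code I submitted; the file name is just for illustration). The operation "replace the maximum with (maximum $-$ minimum)" never changes the gcd of the whole set, so the single number that remains is the gcd of all the inputs.
B_gcd.cc
#include<bits/stdc++.h>
using namespace std;
typedef long long ll;
signed main(){
    ll n;cin>>n;
    ll g=0;  //gcd(0,a)=a, so 0 works as the initial value
    for(ll i=0;i<n;i++){
        ll a;cin>>a;
        g=__gcd(g,a);  //repeated "max -= min" is exactly the Euclidean algorithm
    }
    cout<<g<<endl;
}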
I could not get problem C accepted with a greedy method, so I wrote a program that incorporates the longest-path idea from the editorial. Although it is computationally heavier than the editorial's solution, I added pruning to speed it up.
First, trying every camel order gives $N!$ possibilities. In addition, for each order, finding the distances required between camels so that every part can be crossed costs $O(M \times N^2)$. The total complexity is therefore $O(N! \times M \times N^2)$, which is too slow, so we need to **speed it up with careful pruning** (I gave up on this during the contest). However, **an unintended solution of around $10^{11}$ operations may still pass if you speed it up by constant factors**, so don't give up.
First, the basic approach is, in this order:
① Decide the order of the camels (just enumerate them with next_permutation).
② For each part ($M$ choices) and each starting camel ($N-1$ choices), find the camel from which that part can no longer be placed.
Also, **if you use a greedy method in ②, you will hit a corner case and get WA**. Instead, treat the information "starting from this camel, the part can no longer be placed by that camel (so the two camels must be at least the part's length apart)" as an edge; then the problem **reduces to finding the longest path from the first camel to the last camel**.
Therefore, inside the do-while loop over next_permutation, run a DP with $dp[i] :=$ (the longest distance from the first camel to the $i$-th camel). Choosing a part corresponds to a transition; I omit the full details here, but a simplified sketch follows.
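The following is a minimal sketch of step ② combined with this DP for one fixed camel order (the function name and variable names are my own and do not appear in the actual submission at the end of this article):
#include<bits/stdc++.h>
using namespace std;
typedef long long ll;
//w: camel weights in the chosen order, parts[i] = (length, weight limit)
//returns the minimum total length needed for this particular order
ll solve_one_order(const vector<ll>& w, const vector<pair<ll,ll>>& parts){
    int n = w.size();
    vector<ll> dp(n, 0);  //dp[k] = longest forced distance from camel 0 to camel k
    for(int j = 0; j + 1 < n; j++){
        ll weight = w[j];
        for(int k = j + 1; k < n; k++){
            weight += w[k];  //total weight of camels j..k
            for(const auto& p : parts){
                //if camels j..k together exceed the limit, camels j and k must be
                //at least p.first (the part's length) apart: an edge j -> k
                if(p.second < weight) dp[k] = max(dp[k], dp[j] + p.first);
            }
        }
    }
    return dp[n-1];  //distance from the first camel to the last camel
}
The answer is the minimum of this value over all orders, or $-1$ when some part's weight limit is smaller than the heaviest camel's weight (that camel could never cross it alone).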
(The speedups from here on are constant-factor improvements. In fact, you can probably pass with just the first one, which reduces the number of parts.)
Manage each part as a (length, weight limit) pair. If two parts $(a, b)$ and $(c, d)$ with $a \geqq c$ and $b \leqq d$ exist, then whenever the constraint from $(a, b)$ is satisfied, the constraint from $(c, d)$ is satisfied as well. For example, if parts $(5, 7)$ and $(3, 9)$ both exist, the part $(3, 9)$ can be removed. So **you can thin out the parts until, between any two remaining parts, the order by length agrees with the order by weight limit**. To do this, sort the parts in ascending order of length and, working from the longest part down to the shortest, mark every shorter part that satisfies the condition above as unused. Doing this for every pair of parts costs $O(M^2)$ in total, and since **a part that has already been marked never needs to be examined again, this step can itself be pruned**. A compact sketch of this filtering is shown below.
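As an aside, the same filtering can also be written as a single backward pass with a running minimum instead of the $O(M^2)$ pairwise check used in the submission below; this is only an illustrative sketch (the names are my own):
#include<bits/stdc++.h>
using namespace std;
typedef long long ll;
//keep only non-dominated parts; afterwards both length and weight limit
//are strictly increasing across the kept parts
vector<pair<ll,ll>> remove_dominated(vector<pair<ll,ll>> parts){
    //ascending length; for equal lengths, put the larger (weaker) limit first
    sort(parts.begin(), parts.end(), [](const pair<ll,ll>& a, const pair<ll,ll>& b){
        return a.first != b.first ? a.first < b.first : a.second > b.second;
    });
    vector<pair<ll,ll>> kept;
    ll min_limit = LLONG_MAX;  //smallest weight limit among the parts already kept
    for(int i = (int)parts.size() - 1; i >= 0; i--){
        if(parts[i].second < min_limit){  //not dominated by any part that is at least as long
            kept.push_back(parts[i]);
            min_limit = parts[i].second;
        }
    }
    reverse(kept.begin(), kept.end());  //restore ascending order of length
    return kept;
}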
Second, **if the heaviest camel is not at an end, moving it to an end never makes things worse** (I have not proven this), so you can fix the heaviest camel as the last one and only consider the $(N-1)!$ orders of the remaining camels.
Also, since the parts are now sorted in ascending order of weight limit ($\leftrightarrow$ ascending order of length), once you find the camel at which one part can no longer be placed when starting from a given camel, the corresponding camel for the next part can only be at or after that one. This lets you save work with a **two-pointer (sliding-window) style scan**.
With the constant-factor speedups above, the complexity drops from $O(N! \times M \times N^2)$ to $O(M^2 + (N-1)! \times N \times (M + N))$. For the worst case $N = 10, M = 100000$, this is roughly a drop from $10^{11}$ to $10^{10}$. With further pruning on top of that, I got the running time down to 26 ms (the fastest submission as of October 13, 2020). At this speed, I feel it would pass even if the worst case were included in the tests.
C_fastest.cc
#pragma GCC optimize("Ofast")
#include<bits/stdc++.h>
using namespace std;
#define REP(i,n) for(int i=0;i<int(n);i++)
#define REPD(i,n) for(int i=n-1;i>=0;i--)
#define SIZE(x) int(x.size())
#define INF 2000000000
#define PB push_back
#define F first
#define S second
int w[8];  //camel weights (sorted ascending; the heaviest ends up at w[n-1])
pair<int,int> partsx[100000];  //(length, weight limit) of each part before filtering
bool info[100000];  //info[i] = part i survives the dominated-part filtering
int n,m;
vector<pair<int,int>> parts;  //filtered parts, ascending in both length and weight limit
//fast input: reads one (possibly negative) integer from stdin
inline int read(){
    int x=0,neg=0;char ch=0;
    while(!isdigit(ch)){neg|=ch=='-';ch=getchar();}
    while(isdigit(ch)){x=(x<<1)+(x<<3)+(ch^48);ch=getchar();}
    return neg?-x:x;
}
signed main(){
    n=read();m=read();
    REP(i,n)w[i]=read();
    sort(w,w+n);  //ascending weight; w[n-1] is the heaviest camel
    REP(i,m){partsx[i].F=read();partsx[i].S=read();}
    sort(partsx,partsx+m);  //ascending length
    //if even the heaviest single camel exceeds some part's weight limit, it is impossible
    int ma=INF;
    REP(i,m)ma=min(ma,partsx[i].S);
    if(ma<w[n-1]){
        cout<<-1<<endl;
        return 0;
    }
    //speedup 1: drop dominated parts (a longer part with a limit no larger makes them redundant)
    REP(i,m)info[i]=true;
    REPD(i,m){
        if(!info[i])continue;  //pruning: already-dropped parts need no further checks
        REP(j,i){
            if(partsx[i].S<=partsx[j].S){
                info[j]=false;
            }
        }
    }
    REP(i,m)if(info[i])parts.PB(partsx[i]);
    m=SIZE(parts);  //remaining parts are ascending in both length and weight limit
    int ans=INF;
    //speedup 2: fix the heaviest camel at the end and try the (N-1)! orders of the rest
    do{
        vector<int> dp(n,0);  //dp[k] = longest forced distance from camel 0 to camel k
        REP(j,n-1){
            int k=j;int check=w[k];  //check = total weight of camels j..k (k only moves forward)
            REP(i,m){
                while(k!=n){
                    if(parts[i].S<check){  //camels j..k are too heavy for part i
                        dp[k]=max(dp[j]+parts[i].F,dp[k]);  //so k must be at least parts[i].F after j
                        break;
                    }
                    k++;
                    if(k!=n)check+=w[k];  //only accumulate while k is still a valid camel index
                }
                if(k==n){  //no window starting at j exceeds this limit, nor any heavier one
                    dp[n-1]=max(dp[j],dp[n-1]);  //the last camel still has to come after camel j
                    break;
                }
            }
        }
        ans=min(ans,dp[n-1]);  //total length needed for this order
    }while(next_permutation(w,w+n-1));
    cout<<ans<<endl;
}