POJ1502 MPI Maelstrom [Shortest Path]
MPI Maelstrom
Time Limit: 1000MS | Memory Limit: 10000K
Total Submissions: 12305 | Accepted: 7597
Description
BIT has recently taken delivery of their new supercomputer, a 32 processor Apollo Odyssey distributed shared memory machine with a hierarchical communication subsystem. Valentine McKee's research advisor, Jack Swigert, has asked her to benchmark the new system.
``Since the Apollo is a distributed shared memory machine, memory access and communication times are not uniform,'' Valentine told Swigert. ``Communication is fast between processors that share the same memory subsystem, but it is slower between processors that are not on the same subsystem. Communication between the Apollo and machines in our lab is slower yet.''
``How is Apollo's port of the Message Passing Interface (MPI) working out?'' Swigert asked.
``Not so well,'' Valentine replied. ``To do a broadcast of a message from one processor to all the other n-1 processors, they just do a sequence of n-1 sends. That really serializes things and kills the performance.''
``Is there anything you can do to fix that?''
``Yes,'' smiled Valentine. ``There is. Once the first processor has sent the message to another, those two can then send messages to two other hosts at the same time. Then there will be four hosts that can send, and so on.''
``Ah, so you can do the broadcast as a binary tree!''
``Not really a binary tree -- there are some particular features of our network that we should exploit. The interface cards we have allow each processor to simultaneously send messages to any number of the other processors connected to it. However, the messages don't necessarily arrive at the destinations at the same time -- there is a communication cost involved. In general, we need to take into account the communication costs for each link in our network topologies and plan accordingly to minimize the total time required to do a broadcast.''
Input
The input will describe the topology of a network connecting n processors. The first line of the input will be n, the number of processors, such that 1 <= n <= 100.
The rest of the input defines an adjacency matrix, A. The adjacency matrix is square and of size n x n. Each of its entries will be either an integer or the character x. The value of A(i,j) indicates the expense of sending a message directly from node i to node j. A value of x for A(i,j) indicates that a message cannot be sent directly from node i to node j.
Note that for a node to send a message to itself does not require network communication, so A(i,i) = 0 for 1 <= i <= n. Also, you may assume that the network is undirected (messages can go in either direction with equal overhead), so that A(i,j) = A(j,i). Thus only the entries on the (strictly) lower triangular portion of A will be supplied.
The input to your program will be the lower triangular section of A. That is, the second line of input will contain one entry, A(2,1). The next line will contain two entries, A(3,1) and A(3,2), and so on.
Output
Your program should output the minimum communication time required to broadcast a message from the first processor to all the other processors.
Sample Input
5
50
30 5
100 20 50
10 x x 10
Sample Output
35
Source
East Central North America 1996
Problem summary: there are n processors (numbered from 1). A processor can simultaneously send a message to every processor directly connected to it, and once processor x has received the message that originated at processor 1, it can forward it in the same way. Find the time at which all of processors 2..n have received the message sent by processor 1.
Solution idea: the answer is the maximum, over all nodes, of the shortest-path distance from the source (processor 1); Dijkstra's algorithm works. For the sample, the shortest times to processors 2-5 are 35, 30, 20, and 10, so the answer is 35.
Accepted C++ program:
#include<iostream>
#include<string>
#include<cstring>
#include<queue>
using namespace std;

const int INF=0x3f3f3f3f;
const int N=105;
int g[N][N];   // adjacency matrix of the graph
bool vis[N];   // marks whether a node's shortest path has been finalized
int dist[N];   // shortest-path distance of each node from the source

struct Node{
    int u,w;
    Node(){}
    Node(int u,int w):u(u),w(w){}
    bool operator<(const Node &a)const
    {
        return w>a.w;   // min-heap: the node with the smaller distance comes out first
    }
};

void dijkstra(int n,int s)
{
    priority_queue<Node>q;
    memset(vis,false,sizeof(vis));
    memset(dist,INF,sizeof(dist));   // filling each byte with 0x3f gives a large "infinity"
    q.push(Node(s,0));
    dist[s]=0;
    while(!q.empty()){
        Node f=q.top();
        q.pop();
        int u=f.u;
        if(!vis[u]){
            vis[u]=true;
            for(int i=1;i<=n;i++)
                if(g[u][i]!=INF&&!vis[i]){
                    if(dist[i]>dist[u]+g[u][i]){
                        dist[i]=dist[u]+g[u][i];
                        q.push(Node(i,dist[i]));
                    }
                }
        }
    }
}

// convert a decimal string to an int (used for entries other than "x")
int stringtoi(string s)
{
    int ans=0;
    for(int i=0;i<s.length();i++)
        ans=ans*10+s[i]-'0';
    return ans;
}

int main()
{
    int n;
    cin>>n;
    string s;
    memset(g,INF,sizeof(g));
    // read the strictly lower triangular part of the adjacency matrix
    for(int i=2;i<=n;i++)
        for(int j=1;j<i;j++){
            cin>>s;
            if(s=="x")      // "x" means there is no direct link
                continue;
            g[i][j]=g[j][i]=stringtoi(s);
        }
    dijkstra(n,1);
    int ans=0;
    for(int i=1;i<=n;i++)
        if(ans<dist[i])
            ans=dist[i];
    cout<<ans<<endl;
    return 0;
}
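Since n is at most 100, an O(n^3) Floyd-Warshall all-pairs version would also fit easily within the limits. The sketch below is not the original accepted program, just an illustration under the same input format: it fills the distance matrix directly, relaxes through every intermediate node, and takes the maximum of row 1.

// Alternative sketch (assumption: same input format, C++11 for stoi)
#include<iostream>
#include<string>
#include<cstring>
#include<algorithm>
using namespace std;

const int INF=0x3f3f3f3f;
const int N=105;
int d[N][N];   // d[i][j] = current best cost between i and j

int main()
{
    int n;
    cin>>n;
    string s;
    memset(d,INF,sizeof(d));
    for(int i=1;i<=n;i++) d[i][i]=0;
    // read the strictly lower triangle, mirroring it into the upper triangle
    for(int i=2;i<=n;i++)
        for(int j=1;j<i;j++){
            cin>>s;
            if(s!="x")
                d[i][j]=d[j][i]=stoi(s);
        }
    // Floyd-Warshall: relax every pair (i,j) through every intermediate node k
    for(int k=1;k<=n;k++)
        for(int i=1;i<=n;i++)
            for(int j=1;j<=n;j++)
                if(d[i][k]+d[k][j]<d[i][j])
                    d[i][j]=d[i][k]+d[k][j];
    int ans=0;
    for(int i=2;i<=n;i++)       // broadcast finishes when the farthest node is reached
        ans=max(ans,d[1][i]);
    cout<<ans<<endl;
    return 0;
}

Note that 0x3f3f3f3f is chosen as infinity precisely so that the sum of two "infinite" entries still fits in a 32-bit int during the relaxation step.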