2000
Ahrens, James; Law, Charles; Schroeder, Will; Martin, Ken; Papka, Michael
A Parallel Approach for Efficiently Visualizing Extremely Large, Time-Varying Datasets. Technical Report
2000, (LA-UR-00-1620).
@techreport{info:lanl-repo/lareport/LA-UR-00-1620,
title = {A Parallel Approach for Efficiently Visualizing Extremely Large, Time-Varying Datasets.},
author = {James Ahrens and Charles Law and Will Schroeder and Ken Martin and Michael Papka},
url = {http://datascience.dsscale.org/wp-content/uploads/2017/09/LA-UR-00-1620.pdf},
year = {2000},
date = {2000-01-01},
abstract = {A significant unsolved problem in scientific visualization is how to efficiently visualize extremely large time-varying datasets. Using parallelism provides a promising solution. One drawback of this approach is the high overhead and specialized knowledge often required to create parallel visualization programs. In this paper, we present a parallel visualization system that is scalable, portable, and encapsulates parallel programming details for its users. Our approach was to augment an existing visualization system, the Visualization Toolkit (VTK). Process and communication abstractions were added in order to support task, pipeline and data parallelism. The resulting system allows users to quickly write parallel visualization programs and avoid rewriting these programs when porting to new platforms. The performance of a collection of parallel visualization programs written using this system and run on both a cluster of SGI Origin 2000s and a Linux-based PC cluster is presented. In addition to showing the utility of our approach, the results offer a comparison of the performance of commodity-based computing clusters.},
howpublished = {IEEE Visualization Conference, October 2000, Salt Lake City},
note = {LA-UR-00-1620},
keywords = {large datasets, visualization},
pubstate = {published},
tppubtype = {techreport}
}